How do I copy files that need root access with scp?












I have an Ubuntu server to which I am connecting using SSH.



I need to upload files from my machine into /var/www/ on the server; the files in /var/www/ are owned by root.



Using PuTTY, after I log in, I have to type sudo su and my password first in order to be able to modify files in /var/www/.



But when I am copying files using WinSCP, I can't create/modify files in /var/www/, because the user I'm connecting with does not have permissions on files in /var/www/, and I can't run sudo su as I do in an SSH session.



Do you know how I could deal with this?



If I were working on my local machine, I would call gksudo nautilus, but in this case I only have terminal access to the machine.

  • This seems more like a question for your virtual server provider, or for the putty or winscp developers.

    – dobey
    Oct 29 '12 at 21:17






  • @dobey you're obviously wrong, it is about Ubuntu privileges!

    – Dimitris Sapikas
    Oct 30 '12 at 10:14






  • Why is this closed? This is a perfectly valid question about copying files with scp - every web developer is familiar with this situation.

    – Sergey
    Oct 30 '12 at 22:02











  • Copying protected files between servers in one line? should help.

    – Gilles
    Oct 30 '12 at 23:04











  • I have a similar problem. I create a file (HTML in this case) on a Windows computer and try to copy it with WinSCP to the /var/www/html/website folder. And it says that there is a permission problem. Because I can copy to my /home folder, I copied the file in two steps, but it isn't very convenient :-) I tried adding my user to the www-data group, but it didn't help. Any idea why adding the user to www-data still doesn't allow the user to copy a file to a folder which is owned by the www-data group?

    – JanezKranjski
    Feb 21 '18 at 4:55

10 Answers

Answer 1 (score 111, by Sergey)

You're right, there is no sudo when working with scp. A workaround is to use scp to upload files to a directory where your user has permissions to create files, then log in via ssh and use sudo to move/copy files to their final destination.



scp -r folder/ user@server.tld:/some/folder/you/dont/need/sudo
ssh user@server.tld
$ sudo mv /some/folder /some/folder/requiring/perms
# YOU MAY NEED TO CHANGE THE OWNER like:
# sudo chown -R user:user folder


Another solution would be to change the permissions/ownership of the directories you are uploading the files to, so that your non-privileged user is able to write to those directories.
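
For example, a minimal sketch (not part of the original answer), assuming your login user is dimitris, the web root is /var/www, and the web server runs as the www-data group; adjust the names to your setup:

# give the www-data group ownership of the web root and let group members write to it
sudo chown -R www-data:www-data /var/www
sudo chmod -R g+w /var/www
# make new subdirectories inherit the www-data group
sudo find /var/www -type d -exec chmod g+s {} +
# add your user to the group (log out and back in for this to take effect)
sudo usermod -a -G www-data dimitris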



Generally, working in the root account should be an exception, not a rule - the way you're phrasing your question makes me think maybe you're abusing it a bit, which in turn leads to problems with permissions - under normal circumstances you don't need super-admin privileges to access your own files.



Technically, you can configure Ubuntu to allow remote login directly as root, but this feature is disabled for a reason, so I would strongly advise you against doing that.






  • I didn't get the first solution, could you please be a little more specific?

    – Dimitris Sapikas
    Oct 30 '12 at 9:42











  • When I say my own files I mean /var/www; I am using my VPS as a web server .... on my own folder I have full access

    – Dimitris Sapikas
    Oct 30 '12 at 9:44






  • Re. the first solution. 1. scp -r mysite dimitris@myserver.com:/home/dimitris/ 2. ssh dimitris@myserver.com 3. sudo mv ~/mysite /var/www - it's a 2-step process, first you scp the files to your home dir, then you log in via ssh and copy/move the files to where they should be

    – Sergey
    Oct 30 '12 at 21:58











  • hm.. it works fine ! thank you :)

    – Dimitris Sapikas
    Oct 31 '12 at 19:06




















Answer 2 (score 31, by Willie Wheeler)

Another method is to copy using tar + ssh instead of scp:



tar -c -C ./my/local/dir . \
  | ssh dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"





  • This is the best way to do it.

    – mttdbrd
    Mar 30 '15 at 20:31






  • I can't get this method to work successfully. As written I get sudo: sorry, you must have a tty to run sudo. If I add "-t" to allocate a TTY then I get Pseudo-terminal will not be allocated because stdin is not a terminal. I can't see this working without passwordless sudo.

    – IBBoard
    Oct 26 '15 at 16:35






  • @IBBoard : try the solution here using ssh -t: ssh -t dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"

    – Alexander Bird
    Aug 18 '16 at 15:21








  • @AlexanderBird While that works in many cases, I'm not sure it works here because we're trying to pipe a tarball over the SSH connection. See serverfault.com/questions/14389/…

    – IBBoard
    Aug 19 '16 at 19:36











  • This is what finally worked for me. If you don't have permissions on a remote file that you want to copy to local: do a sudo tar to archive it, change permissions using chmod and chown, and then copy it to local. Especially if it's a directory.

    – forumulator
    Aug 9 '18 at 9:07




















Answer 3 (score 24)

You can also use ansible to accomplish this.



Copy to remote host using ansible's copy module:



ansible -i HOST, -b -m copy -a "src=SRC_FILEPATH dest=DEST_FILEPATH" all


Fetch from remote host using ansible's fetch module:



ansible -i HOST, -b -m fetch -a "src=SRC_FILEPATH dest=DEST_FILEPATH flat=yes" all


NOTE:




  • The comma in the -i HOST, syntax is not a typo. It is the way to use ansible without needing an inventory file.


  • -b causes the actions on the server to be done as root. -b expands to --become, and the default --become-user is root, with the default --become-method being sudo.


  • flat=yes copies just the file, doesn't copy whole remote path leading to the file

  • Using wildcards in the file paths isn't supported by these ansible modules.

  • Copying a directory is supported by the copy module, but not by the fetch module.


Specific Invocation for this Question



Here's an example that is specific and fully specified, assuming the directory on your local host containing the files to be distributed is sourcedir, and that the remote target's hostname is hostname:



cd sourcedir && \
ansible \
--inventory-file hostname, \
--become \
--become-method sudo \
--become-user root \
--module-name copy \
--args "src=. dest=/var/www/" \
all


With the concise invocation being:



cd sourcedir && \
ansible -i hostname, -b -m copy -a "src=. dest=/var/www/" all


P.S., I realize that saying "just install this fabulous tool" is kind of a
tone-deaf answer. But I've found ansible to be super useful for administering remote servers, so installing it will surely bring you other benefits beyond deploying files.






  • I like this answer but I recommend you direct it at the asked question versus more generalized commentary before upvote. something like ansible -i "hostname," all -u user --become -m copy -a ...

    – Mike D
    Feb 16 '16 at 19:30











  • @MikeD: how do the above changes look?

    – erik.weathers
    Feb 19 '16 at 3:08






  • Would something like -i 'host,' be valid syntax? I think it's easy to lose punctuation like that when reading a command. (For the reader I mean, if not the shell.)

    – mwfearnley
    Jun 10 '16 at 15:37






  • @mwfearnley: sure, the shell will treat -i 'host,' the same as -i host, or -i "host,". In general I prefer to keep these invocations as short as possible to keep them from being daunting, but you should feel free to make it as verbose and explicit as you think is needed for clarity.

    – erik.weathers
    Jun 15 '16 at 3:00






  • Way to go thinking outside the box! Great use of Ansible

    – jonatan
    Sep 16 '18 at 9:19




















Answer 4 (score 13)

Quick way



From server to local machine:



ssh user@server "sudo cat /etc/dir/file" > /home/user/file


From local machine to server:



cat /home/user/file | ssh user@server "sudo tee -a /etc/dir/file"
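
If you want to replace the remote file rather than append to it (an assumption about the intent; the original answer uses -a, which appends), drop tee's -a flag and discard its stdout:

cat /home/user/file | ssh user@server "sudo tee /etc/dir/file > /dev/null"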





  • This answer is underrated. It's simple, clean, reads or writes a root file with a single atomic operation, and requires nothing that's not already guaranteed to be there if you're using scp. Main drawback is that it does not copy permissions. If you want that, the tar solution is better. This is a powerful technique, particularly if combined with xargs/bash magic to traverse paths.

    – markgo2k
    May 16 '18 at 18:12











  • I think the question was about uploading a file from local to remote and not vice versa

    – Korayem
    Dec 19 '18 at 6:53




















Answer 5 (score 12)

When you run sudo su, any files you create will be owned by root, but it is not possible by default to directly log in as root with ssh or scp. It is also not possible to use sudo with scp, so the files are not usable. Fix this by claiming ownership over your files:



Assuming your user name is dimitri, you could use this command:



sudo chown -R dimitri:dimitri /home/dimitri


From then on, as mentioned in other answers, the "Ubuntu" way is to use sudo, and not root logins. It is a useful paradigm, with great security advantages.






  • I am using this solution anyway, but what if I could get full access to my own file system? I don't want to type sudo chown ... for every single directory :S

    – Dimitris Sapikas
    Oct 30 '12 at 9:47






  • Changing ownership of all system files to the user for passing convenience is highly discouraged. It allows any userspace bug you might encounter to severely compromise the security of your system. It is much better to change the ownership of the files that you need to change or update by SCP, but to leave everything else owned by root (like it is supposed to be). That said, the -R in chown tells it to change the ownership of that directory, and all children files and directories recursively... so you can do anything you like.

    – trognanders
    Oct 30 '12 at 17:48











  • hmm .... that seems to work fine, thank you! sorry I can't upvote (the system does not allow me to ...)

    – Dimitris Sapikas
    Oct 30 '12 at 19:05




















Answer 6 (score 8)

Maybe the best way is to use rsync (Cygwin/cwRsync on Windows) over SSH?



For example, to upload files with owner www-data:



rsync -a --rsync-path="sudo -u www-data rsync" path_to_local_data/ login@srv01.example.com:/var/www


In your case, if you need root privileges, the command will look like this:



rsync -a --rsync-path="sudo rsync" path_to_local_data/ login@srv01.example.com:/var/www
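
Note that --rsync-path="sudo rsync" only works if sudo on the server can run without prompting for a password in this non-interactive session. One way to arrange that (a sketch of a possible sudoers rule, not part of the original answer; it assumes the remote user is login and rsync lives at /usr/bin/rsync) is to allow passwordless sudo for rsync only:

# added with: sudo visudo -f /etc/sudoers.d/rsync
login ALL=(ALL) NOPASSWD: /usr/bin/rsync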


See: scp to remote server with sudo.






    Answer 7 (score 5)

    If you use the OpenSSH tools instead of PuTTY, you can accomplish this by initiating the scp file transfer on the server with sudo. Make sure you have an sshd daemon running on your local machine. With ssh -R you can give the server a way to contact your machine.



    On your machine:



    ssh -R 11111:localhost:22 REMOTE_USERNAME@SERVERNAME


    In addition to logging you in on the server, this will forward every connection made on the server's port 11111 to your machine's port 22: the port your sshd is listening on.



    On the server, start the file transfer like this:



    cd /var/www/
    sudo scp -P 11111 -r LOCAL_USERNAME@localhost:FOLDERNAME .





      Answer 8 (score 1)

      You may use a script I've written, inspired by this topic:



      touch /tmp/justtest && scpassudo /tmp/justtest remoteuser@ssh.superserver.com:/tmp/


      but this requires some crazy stuff (which is, btw, done automatically by the script):




      1. the server the file is being sent to will no longer ask for a password while establishing an ssh connection to the source computer

      2. because the sudo prompt must be suppressed on the server, sudo will no longer ask the user for a password on the remote machine


      Here goes the script:



      ( cat << 'ENDOFSCRIPT'
      #!/bin/bash
      interface=wlan0
      if [[ $# -ge 3 ]]; then interface=$3; fi
      thisIP=$(ifconfig | grep $interface -b1 | tail -n1 | egrep -o '[0-9.]{4,}' -m1 | head -n 1)
      thisUser=$(whoami)
      localFilePath=/tmp/justfortest
      destIP=192.168.0.2
      destUser=silesia
      #dest
      #destFolderOnRemoteMachine=/opt/glassfish/glassfish/
      #destFolderOnRemoteMachine=/tmp/

      if [[ $# -eq 0 ]]; then
      echo -e "Send file to remote server to locatoin where root permision is needed.ntusage: $0 local_filename [username@](ip|host):(remote_folder/|remote_filename) [optionalInterface=wlan0]"
      echo -e "Example: nttouch /tmp/justtest &&nt $0 /tmp/justtest remoteuser@ssh.superserver.com:/tmp/ "
      exit 1
      fi

      localFilePath=$1

      test -e $localFilePath

      destString=$2
      usernameAndHost=$(echo $destString | cut -f1 -d':')

      if [[ "$usernameAndHost" == *"@"* ]]; then
      destUser=$(echo $usernameAndHost | cut -f1 -d'@')
      destIP=$(echo $usernameAndHost | cut -f2 -d'@')
      else
      destIP=$usernameAndHost
      destUser=$thisUser
      fi

      destFolderOnRemoteMachine=$(echo $destString | cut -f2 -d':')

      set -e #stop script if there is even single error

      echo 'First step: we need to be able to execute scp without any user interaction'
      echo 'generating public key on machine, which will receive file'
      ssh $destUser@$destIP 'test -e ~/.ssh/id_rsa.pub -a -e ~/.ssh/id_rsa || ssh-keygen -t rsa'
      echo 'Done'

      echo 'Second step: download public key from remote machine to this machine so this machine allows remote machine (this one receiveing file) to login without asking for password'

      key=$(ssh $destUser@$destIP 'cat ~/.ssh/id_rsa.pub')
      if ! grep "$key" ~/.ssh/authorized_keys; then
      echo $key >> ~/.ssh/authorized_keys
      echo 'Added key to authorized hosts'
      else
      echo "Key already exists in authorized keys"
      fi

      echo "We will want to execute sudo command remotely, which means turning off asking for password"
      echo 'This can be done by this tutorial http://stackoverflow.com/a/10310407/781312'
      echo 'This you have to do manually: '
      echo -e "execute in new terminal: ntssh $destUser:$destIPnPress enter when ready"
      read
      echo 'run there sudo visudo'
      read
      echo 'change '
      echo ' %sudo ALL=(ALL:ALL) ALL'
      echo 'to'
      echo ' %sudo ALL=(ALL:ALL) NOPASSWD: ALL'
      echo "After this step you will be done."
      read

      listOfFiles=$(ssh $destUser@$destIP "sudo ls -a")

      if [[ "$listOfFiles" != "" ]]; then
      echo "Sending by executing command, in fact, receiving, file on remote machine"
      echo 'Note that this command (due to " instead of single quotes, see man bash | less -p quotes) is filled with values from the local machine'
      echo -e "Executing \n\t""identy=~/.ssh/id_rsa; sudo scp -i $identy $(whoami)@$thisIP:$(readlink -f $localFilePath) $destFolderOnRemoteMachine""\non remote machine"
      ssh $destUser@$destIP "identy=~/.ssh/id_rsa; sudo scp -i $identy $(whoami)@$thisIP:$(readlink -f $localFilePath) $destFolderOnRemoteMachine"
      ssh $destUser@$destIP "ls ${destFolderOnRemoteMachine%\\n}/$(basename $localFilePath)"
      if [[ ! "$?" -eq 0 ]]; then echo "errror in validating"; else echo -e "SUCCESS! Successfully sentnt$localFilePath nto nt$destStringnFind more at http://arzoxadi.tk"; fi
      else
      echo "something went wrong with executing sudo on remote host, failure"

      fi
      ENDOFSCRIPT
      ) | sudo tee /usr/bin/scpassudo && chmod +x /usr/bin/scpassudo





      • @Braiam yeah, sure, sorry for link, the script is pretty long and that was the reason :)

        – test30
        Nov 22 '13 at 1:42



















      Answer 9 (score 1)

      You can combine ssh, sudo and e.g. tar to transfer files between servers without being able to log in as root and without having permission to access the files with your user. This is slightly fiddly, so I've written a script to help with this. You can find the script here: https://github.com/sigmunau/sudoscp



      or here:




      #! /bin/bash
      res=0
      from=$1
      to=$2
      shift
      shift
      files="$@"
      if test -z "$from" -o -z "$to" -o -z "$files"
      then
      echo "Usage: $0 (file)*"
      echo "example: $0 server1 server2 /usr/bin/myapp"
      exit 1
      fi

      read -s -p "Enter Password: " sudopassword
      echo ""
      temp1=$(mktemp)
      temp2=$(mktemp)
      (echo "$sudopassword";echo "$sudopassword"|ssh $from sudo -S tar c -P -C / $files 2>$temp1)|ssh $to sudo -S tar x -v -P -C / 2>$temp2
      sourceres=${PIPESTATUS[0]}
      if [ $? -ne 0 -o $sourceres -ne 0 ]
      then
      echo "Failure!" >&2
      echo "$from output:" >&2
      cat $temp1 >&2
      echo "" >&2
      echo "$to output:" >&2
      cat $temp2 >&2
      res=1
      fi

      rm $temp1 $temp2
      exit $res





      • Welcome to Ask Ubuntu. Could you please include the script in your answer? I know it is unlikely but if the github repo was ever removed or the url changed then the answer would be void. It is better to include the script directly and leave the github repo as a source.

        – Michael Lindman
        Jul 1 '15 at 15:25





















      Answer 10 (score 0)

      Here's a modified version of Willie Wheeler's answer that transfers the file(s) via tar but also supports passing a password to sudo on the remote host.



      (stty -echo; read passwd; stty echo; echo $passwd; tar -cz foo.*) \
      | ssh remote_host "sudo -S bash -c \"tar -C /var/www/ -xz; echo\""


      The little bit of extra magic here is the -S option to sudo. From the sudo man page:




      -S, --stdin
      Write the prompt to the standard error and read the password from the standard input instead of using the terminal device. The password must be followed by a newline character.




      Now we actually want the output of tar to be piped into ssh and that redirects the stdin of ssh to the stdout of tar, removing any way to pass the password into sudo from the interactive terminal. (We could use sudo's ASKPASS feature on the remote end but that is another story.) We can get the password into sudo though by capturing it in advance and prepending it to the tar output by performing those operations in a subshell and piping the output of the subshell into ssh. This also has the added advantage of not leaving an environment variable containing our password dangling in our interactive shell.



      You'll notice I didn't execute 'read' with the -p option to print a prompt. This is because the password prompt from sudo is conveniently passed back to the stderr of our interactive shell via ssh. You might wonder "how is sudo executing given it is running inside ssh to the right of our pipe?" When we execute multiple commands and pipe the output of one into another, the parent shell (the interactive shell in this case) executes each command in the sequence immediately after executing the previous. As each command behind a pipe is executed the parent shell attaches (redirects) the stdout of the left-hand side to the stdin of the right-hand side. Output then becomes input as it passes through processes. We can see this in action by executing the entire command and backgrounding the process group (Ctrl-z) before typing our password, and then viewing the process tree.



      $ (stty -echo; read passwd; stty echo; echo $passwd; tar -cz foo.*) | ssh \
      remote_host "sudo -S bash -c \"tar -C /var/www/ -xz; echo\""
      [sudo] password for bruce:
      [1]+ Stopped ( stty -echo; read passwd; stty echo; echo
      $passwd; tar -cz foo.* ) | ssh remote_host "sudo -S bash -c "tar -C
      /var/www/ -xz; echo""

      $ pstree -lap $$
      bash,7168
      ├─bash,7969
      ├─pstree,7972 -lap 7168
      └─ssh,7970 remote_host sudo -S bash -c "tar -C /var/www/ -xz; echo"


      Our interactive shell is PID 7168, our subshell is PID 7969 and our ssh process is PID 7970.



      The only drawback is that read will accept input before sudo has time to send back its prompt. On a fast connection and fast remote host you won't notice this but you might if either is slow. Any delay will not affect the ability to enter the password; the prompt just might appear after you have started typing.



      Note that I simply added a hosts file entry for "remote_host" to my local machine for the demo.






      share|improve this answer

























        Your Answer








        StackExchange.ready(function() {
        var channelOptions = {
        tags: "".split(" "),
        id: "89"
        };
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function() {
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled) {
        StackExchange.using("snippets", function() {
        createEditor();
        });
        }
        else {
        createEditor();
        }
        });

        function createEditor() {
        StackExchange.prepareEditor({
        heartbeatType: 'answer',
        autoActivateHeartbeat: false,
        convertImagesToLinks: true,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: 10,
        bindNavPrevention: true,
        postfix: "",
        imageUploader: {
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        },
        onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        });


        }
        });














        draft saved

        draft discarded


















        StackExchange.ready(
        function () {
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2faskubuntu.com%2fquestions%2f208378%2fhow-do-i-copy-files-that-need-root-access-with-scp%23new-answer', 'question_page');
        }
        );

        Post as a guest















        Required, but never shown

























        10 Answers
        10






        active

        oldest

        votes








        10 Answers
        10






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        111














        You're right, there is no sudo when working with scp. A workaround is to use scp to upload files to a directory where your user has permissions to create files, then log in via ssh and use sudo to move/copy files to their final destination.



        scp -r folder/ user@server.tld:/some/folder/you/dont/need/sudo
        ssh user@server.tld
        $ sudo mv /some/folder /some/folder/requiring/perms
        # YOU MAY NEED TO CHANGE THE OWNER like:
        # sudo chown -R user:user folder


        Another solution would be to change permissions/ownership of the directories you uploading the files to, so your non-privileged user is able to write to those directories.



        Generally, working in the root account should be an exception, not a rule - the way you phrasing your question makes me think maybe you're abusing it a bit, which in turn leads to problems with permissions - under normal circumstances you don't need super-admin privileges to access your own files.



        Technically, you can configure Ubuntu to allow remote login directly as root, but this feature is disabled for a reason, so I would strongly advice you against doing that.






        share|improve this answer


























        • i didn't get the first solution , could you please be a litle more spesific ?

          – Dimitris Sapikas
          Oct 30 '12 at 9:42











        • Whe i say my own files i mean /var/www , i am using my vps as web server .... on my own folder i have full access

          – Dimitris Sapikas
          Oct 30 '12 at 9:44






        • 7





          Re. the first solution. 1. scp -R mysite dimitris@myserver.com:/home/dimitris/ 2. ssh dimitris@myserver.com 3. sudo mv ~/mysite /var/www - it's a 2-step process, first you scp the files to your home dir, then you log in via ssh and copy/move the files to where they should be

          – Sergey
          Oct 30 '12 at 21:58











        • hm.. it works fine ! thank you :)

          – Dimitris Sapikas
          Oct 31 '12 at 19:06
















        111














        You're right, there is no sudo when working with scp. A workaround is to use scp to upload files to a directory where your user has permissions to create files, then log in via ssh and use sudo to move/copy files to their final destination.



        scp -r folder/ user@server.tld:/some/folder/you/dont/need/sudo
        ssh user@server.tld
        $ sudo mv /some/folder /some/folder/requiring/perms
        # YOU MAY NEED TO CHANGE THE OWNER like:
        # sudo chown -R user:user folder


        Another solution would be to change permissions/ownership of the directories you uploading the files to, so your non-privileged user is able to write to those directories.



        Generally, working in the root account should be an exception, not a rule - the way you phrasing your question makes me think maybe you're abusing it a bit, which in turn leads to problems with permissions - under normal circumstances you don't need super-admin privileges to access your own files.



        Technically, you can configure Ubuntu to allow remote login directly as root, but this feature is disabled for a reason, so I would strongly advice you against doing that.






        share|improve this answer


























        • i didn't get the first solution , could you please be a litle more spesific ?

          – Dimitris Sapikas
          Oct 30 '12 at 9:42











        • Whe i say my own files i mean /var/www , i am using my vps as web server .... on my own folder i have full access

          – Dimitris Sapikas
          Oct 30 '12 at 9:44






        • 7





          Re. the first solution. 1. scp -R mysite dimitris@myserver.com:/home/dimitris/ 2. ssh dimitris@myserver.com 3. sudo mv ~/mysite /var/www - it's a 2-step process, first you scp the files to your home dir, then you log in via ssh and copy/move the files to where they should be

          – Sergey
          Oct 30 '12 at 21:58











        • hm.. it works fine ! thank you :)

          – Dimitris Sapikas
          Oct 31 '12 at 19:06














        111












        111








        111







        You're right, there is no sudo when working with scp. A workaround is to use scp to upload files to a directory where your user has permissions to create files, then log in via ssh and use sudo to move/copy files to their final destination.



        scp -r folder/ user@server.tld:/some/folder/you/dont/need/sudo
        ssh user@server.tld
        $ sudo mv /some/folder /some/folder/requiring/perms
        # YOU MAY NEED TO CHANGE THE OWNER like:
        # sudo chown -R user:user folder


        Another solution would be to change permissions/ownership of the directories you uploading the files to, so your non-privileged user is able to write to those directories.



        Generally, working in the root account should be an exception, not a rule - the way you phrasing your question makes me think maybe you're abusing it a bit, which in turn leads to problems with permissions - under normal circumstances you don't need super-admin privileges to access your own files.



        Technically, you can configure Ubuntu to allow remote login directly as root, but this feature is disabled for a reason, so I would strongly advice you against doing that.






        share|improve this answer















        You're right, there is no sudo when working with scp. A workaround is to use scp to upload files to a directory where your user has permissions to create files, then log in via ssh and use sudo to move/copy files to their final destination.



        scp -r folder/ user@server.tld:/some/folder/you/dont/need/sudo
        ssh user@server.tld
        $ sudo mv /some/folder /some/folder/requiring/perms
        # YOU MAY NEED TO CHANGE THE OWNER like:
        # sudo chown -R user:user folder


        Another solution would be to change permissions/ownership of the directories you uploading the files to, so your non-privileged user is able to write to those directories.



        Generally, working in the root account should be an exception, not a rule - the way you phrasing your question makes me think maybe you're abusing it a bit, which in turn leads to problems with permissions - under normal circumstances you don't need super-admin privileges to access your own files.



        Technically, you can configure Ubuntu to allow remote login directly as root, but this feature is disabled for a reason, so I would strongly advice you against doing that.







        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited Feb 21 '18 at 1:28









        Brandon Bertelsen

        1,82212445




        1,82212445










        answered Oct 29 '12 at 22:22









        SergeySergey

        36.5k98799




        36.5k98799













        • i didn't get the first solution , could you please be a litle more spesific ?

          – Dimitris Sapikas
          Oct 30 '12 at 9:42











        • Whe i say my own files i mean /var/www , i am using my vps as web server .... on my own folder i have full access

          – Dimitris Sapikas
          Oct 30 '12 at 9:44






        • 7





          Re. the first solution. 1. scp -R mysite dimitris@myserver.com:/home/dimitris/ 2. ssh dimitris@myserver.com 3. sudo mv ~/mysite /var/www - it's a 2-step process, first you scp the files to your home dir, then you log in via ssh and copy/move the files to where they should be

          – Sergey
          Oct 30 '12 at 21:58











        • hm.. it works fine ! thank you :)

          – Dimitris Sapikas
          Oct 31 '12 at 19:06



















        • i didn't get the first solution , could you please be a litle more spesific ?

          – Dimitris Sapikas
          Oct 30 '12 at 9:42











        • Whe i say my own files i mean /var/www , i am using my vps as web server .... on my own folder i have full access

          – Dimitris Sapikas
          Oct 30 '12 at 9:44






        • 7





          Re. the first solution. 1. scp -R mysite dimitris@myserver.com:/home/dimitris/ 2. ssh dimitris@myserver.com 3. sudo mv ~/mysite /var/www - it's a 2-step process, first you scp the files to your home dir, then you log in via ssh and copy/move the files to where they should be

          – Sergey
          Oct 30 '12 at 21:58











        • hm.. it works fine ! thank you :)

          – Dimitris Sapikas
          Oct 31 '12 at 19:06

















        i didn't get the first solution , could you please be a litle more spesific ?

        – Dimitris Sapikas
        Oct 30 '12 at 9:42





        i didn't get the first solution , could you please be a litle more spesific ?

        – Dimitris Sapikas
        Oct 30 '12 at 9:42













        Whe i say my own files i mean /var/www , i am using my vps as web server .... on my own folder i have full access

        – Dimitris Sapikas
        Oct 30 '12 at 9:44





        Whe i say my own files i mean /var/www , i am using my vps as web server .... on my own folder i have full access

        – Dimitris Sapikas
        Oct 30 '12 at 9:44




        7




        7





        Re. the first solution. 1. scp -R mysite dimitris@myserver.com:/home/dimitris/ 2. ssh dimitris@myserver.com 3. sudo mv ~/mysite /var/www - it's a 2-step process, first you scp the files to your home dir, then you log in via ssh and copy/move the files to where they should be

        – Sergey
        Oct 30 '12 at 21:58





        Re. the first solution. 1. scp -R mysite dimitris@myserver.com:/home/dimitris/ 2. ssh dimitris@myserver.com 3. sudo mv ~/mysite /var/www - it's a 2-step process, first you scp the files to your home dir, then you log in via ssh and copy/move the files to where they should be

        – Sergey
        Oct 30 '12 at 21:58













        hm.. it works fine ! thank you :)

        – Dimitris Sapikas
        Oct 31 '12 at 19:06





        hm.. it works fine ! thank you :)

        – Dimitris Sapikas
        Oct 31 '12 at 19:06













        31














        Another method is to copy using tar + ssh instead of scp:



        tar -c -C ./my/local/dir 
        | ssh dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"





        share|improve this answer





















        • 2





          This is the best way to do it.

          – mttdbrd
          Mar 30 '15 at 20:31






        • 2





          I can't get this method to work successfully. As written I get sudo: sorry, you must have a tty to run sudo. If I add "-t" to allocate a TTY then I get Pseudo-terminal will not be allocated because stdin is not a terminal.. I can't see this working without passwordless sudo.

          – IBBoard
          Oct 26 '15 at 16:35






        • 1





          @IBBoard : try the solution here using ssh -t: ssh -t dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"

          – Alexander Bird
          Aug 18 '16 at 15:21








        • 2





          @AlexanderBird While that works in many cases, I'm not sure it works here because we're trying to pipe a tarball over the SSH connection. See serverfault.com/questions/14389/…

          – IBBoard
          Aug 19 '16 at 19:36











        • This is what finally worked for me. You don't have permissions to a remote file that you want to copy to local, do a sudo tar, archive it, change permissions using chmod and chown, and then copy it to local. Especially if it's a directory.

          – forumulator
          Aug 9 '18 at 9:07
















        31














        Another method is to copy using tar + ssh instead of scp:



        tar -c -C ./my/local/dir 
        | ssh dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"





        share|improve this answer





















        • 2





          This is the best way to do it.

          – mttdbrd
          Mar 30 '15 at 20:31






        • 2





          I can't get this method to work successfully. As written I get sudo: sorry, you must have a tty to run sudo. If I add "-t" to allocate a TTY then I get Pseudo-terminal will not be allocated because stdin is not a terminal.. I can't see this working without passwordless sudo.

          – IBBoard
          Oct 26 '15 at 16:35






        • 1





          @IBBoard : try the solution here using ssh -t: ssh -t dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"

          – Alexander Bird
          Aug 18 '16 at 15:21








        • 2





          @AlexanderBird While that works in many cases, I'm not sure it works here because we're trying to pipe a tarball over the SSH connection. See serverfault.com/questions/14389/…

          – IBBoard
          Aug 19 '16 at 19:36











        • This is what finally worked for me. You don't have permissions to a remote file that you want to copy to local, do a sudo tar, archive it, change permissions using chmod and chown, and then copy it to local. Especially if it's a directory.

          – forumulator
          Aug 9 '18 at 9:07














        31












        31








        31







        Another method is to copy using tar + ssh instead of scp:



        tar -c -C ./my/local/dir 
        | ssh dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"





        share|improve this answer















        Another method is to copy using tar + ssh instead of scp:



        tar -c -C ./my/local/dir 
        | ssh dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"






        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited Aug 19 '16 at 19:59









        IBBoard

        19217




        19217










        answered Oct 3 '14 at 18:56









        Willie WheelerWillie Wheeler

        50156




        50156








        • 2





          This is the best way to do it.

          – mttdbrd
          Mar 30 '15 at 20:31






        • 2





          I can't get this method to work successfully. As written I get sudo: sorry, you must have a tty to run sudo. If I add "-t" to allocate a TTY then I get Pseudo-terminal will not be allocated because stdin is not a terminal.. I can't see this working without passwordless sudo.

          – IBBoard
          Oct 26 '15 at 16:35






        • 1





          @IBBoard : try the solution here using ssh -t: ssh -t dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"

          – Alexander Bird
          Aug 18 '16 at 15:21








        • 2





          @AlexanderBird While that works in many cases, I'm not sure it works here because we're trying to pipe a tarball over the SSH connection. See serverfault.com/questions/14389/…

          – IBBoard
          Aug 19 '16 at 19:36











        • This is what finally worked for me. You don't have permissions to a remote file that you want to copy to local, do a sudo tar, archive it, change permissions using chmod and chown, and then copy it to local. Especially if it's a directory.

          – forumulator
          Aug 9 '18 at 9:07














        • 2





          This is the best way to do it.

          – mttdbrd
          Mar 30 '15 at 20:31






        • 2





          I can't get this method to work successfully. As written I get sudo: sorry, you must have a tty to run sudo. If I add "-t" to allocate a TTY then I get Pseudo-terminal will not be allocated because stdin is not a terminal.. I can't see this working without passwordless sudo.

          – IBBoard
          Oct 26 '15 at 16:35






        • 1





          @IBBoard : try the solution here using ssh -t: ssh -t dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"

          – Alexander Bird
          Aug 18 '16 at 15:21








        • 2





          @AlexanderBird While that works in many cases, I'm not sure it works here because we're trying to pipe a tarball over the SSH connection. See serverfault.com/questions/14389/…

          – IBBoard
          Aug 19 '16 at 19:36











        • This is what finally worked for me. You don't have permissions to a remote file that you want to copy to local, do a sudo tar, archive it, change permissions using chmod and chown, and then copy it to local. Especially if it's a directory.

          – forumulator
          Aug 9 '18 at 9:07








        2




        2





        This is the best way to do it.

        – mttdbrd
        Mar 30 '15 at 20:31





        This is the best way to do it.

        – mttdbrd
        Mar 30 '15 at 20:31




        2




        2





        I can't get this method to work successfully. As written I get sudo: sorry, you must have a tty to run sudo. If I add "-t" to allocate a TTY then I get Pseudo-terminal will not be allocated because stdin is not a terminal.. I can't see this working without passwordless sudo.

        – IBBoard
        Oct 26 '15 at 16:35





        I can't get this method to work successfully. As written I get sudo: sorry, you must have a tty to run sudo. If I add "-t" to allocate a TTY then I get Pseudo-terminal will not be allocated because stdin is not a terminal.. I can't see this working without passwordless sudo.

        – IBBoard
        Oct 26 '15 at 16:35




        1




        1





        @IBBoard : try the solution here using ssh -t: ssh -t dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"

        – Alexander Bird
        Aug 18 '16 at 15:21







        @IBBoard : try the solution here using ssh -t: ssh -t dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"

        – Alexander Bird
        Aug 18 '16 at 15:21






        2




        2





        @AlexanderBird While that works in many cases, I'm not sure it works here because we're trying to pipe a tarball over the SSH connection. See serverfault.com/questions/14389/…

        – IBBoard
        Aug 19 '16 at 19:36





        @AlexanderBird While that works in many cases, I'm not sure it works here because we're trying to pipe a tarball over the SSH connection. See serverfault.com/questions/14389/…

        – IBBoard
        Aug 19 '16 at 19:36













        This is what finally worked for me. You don't have permissions to a remote file that you want to copy to local, do a sudo tar, archive it, change permissions using chmod and chown, and then copy it to local. Especially if it's a directory.

        – forumulator
        Aug 9 '18 at 9:07





        This is what finally worked for me. You don't have permissions to a remote file that you want to copy to local, do a sudo tar, archive it, change permissions using chmod and chown, and then copy it to local. Especially if it's a directory.

        – forumulator
        Aug 9 '18 at 9:07











        24














        You can also use ansible to accomplish this.



        Copy to remote host using ansible's copy module:



        ansible -i HOST, -b -m copy -a "src=SRC_FILEPATH dest=DEST_FILEPATH" all


        Fetch from remote host using ansible's fetch module:



        ansible -i HOST, -b -m fetch -a "src=SRC_FILEPATH dest=DEST_FILEPATH flat=yes" all


        NOTE:




        • The comma in the -i HOST, syntax is not a typo. It is the way to use ansible without needing an inventory file.


        • -b causes the actions on the server to be done as root. -b expands to --become, and the default --become-user is root, with the default --become-method being sudo.


        • flat=yes copies just the file, doesn't copy whole remote path leading to the file

        • Using wildcards in the file paths isn't supported by these ansible modules.

        • Copying a directory is supported by the copy module, but not by the fetch module.


        Specific Invocation for this Question



        Here's an example that is specific and fully specified, assuming the directory on your local host containing the files to be distributed is sourcedir, and that the remote target's hostname is hostname:



        cd sourcedir && 
        ansible
        --inventory-file hostname,
        --become
        --become-method sudo
        --become-user root
        --module-name copy
        --args "src=. dest=/var/www/"
        all


        With the concise invocation being:



        cd sourcedir && 
        ansible -i hostname, -b -m copy -a "src=. dest=/var/www/" all


        P.S., I realize that saying "just install this fabulous tool" is kind of a
        tone-deaf answer. But I've found ansible to be super useful for administering remote servers, so installing it will surely bring you other benefits beyond deploying files.






        share|improve this answer


























        • I like the this answer but I recommend you direct it at the asked question versus more generalized commentary before upvote. something like ansible -i "hostname," all -u user --become -m copy -a ...

          – Mike D
          Feb 16 '16 at 19:30











        • @MikeD: how do the above changes look?

          – erik.weathers
          Feb 19 '16 at 3:08






        • 1





          Would something like -i 'host,' be valid syntax? I think it's easy to lose punctuation like that when reading a command. (For the reader I mean, if not the shell.)

          – mwfearnley
          Jun 10 '16 at 15:37






        • 1





          @mwfearnley: sure, the shell will treat -i 'host,' and same as -i host, or -i "host,". In general I prefer to keep these invocations as short as possible to keep them from being daunting, but you should feel free to make it as verbose and explicit as you think is needed for clarity.

          – erik.weathers
          Jun 15 '16 at 3:00






        • 1





          Way to go thinking outside the box! Great use of Ansible

          – jonatan
          Sep 16 '18 at 9:19
















        13














        Quick way



        From server to local machine:



        ssh user@server "sudo cat /etc/dir/file" > /home/user/file


        From local machine to server:



        cat /home/user/file | ssh user@server "sudo tee -a /etc/dir/file"
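
        Note that tee -a appends to the remote file; drop the -a to overwrite it instead. As a minimal sketch for this question's /var/www case (the file name index.html and user@server are placeholders, and it assumes sudo will not need to prompt for a password over this connection, a caveat that applies to the commands above as well):

        cat index.html | ssh user@server "sudo tee /var/www/index.html > /dev/null"

        The > /dev/null only silences the copy of the data that tee echoes back to your terminal; the write to /var/www/index.html still happens as root.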





        answered Jan 16 '17 at 13:01 by Anderson Lira (edited by KWILLIAMS)





















        • 2





          This answer is underrated. It's simple, clean, reads or writes a root file with a single atomic operation, and requires nothing that's not already guaranteed to be there if you're using scp. Main drawback is that it does not copy permissions. If you want that, the tar solution is better. This is a powerful technique, particularly if combined with xargs/bash magic to traverse paths.

          – markgo2k
          May 16 '18 at 18:12











        • I think the question was about uploading a file from local to remote and not vice versa

          – Korayem
          Dec 19 '18 at 6:53
















        12














        When you run sudo su, any files you create will be owned by root, but it is not possible by default to log in directly as root with ssh or scp. It is also not possible to use sudo with scp, so root-owned files cannot be written that way. Fix this by claiming ownership of your files:



        Assuming your user name was dimitri, you could use this command.



        sudo chown -R dimitri:dimitri /home/dimitri


        From then on, as mentioned in other answers, the "Ubuntu" way is to use sudo, and not root logins. It is a useful paradigm, with great security advantages.
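
        As the comment below suggests, it is safer to take ownership of only the tree you actually deploy to rather than of everything. A minimal sketch applying the same idea to this question's web root (the path /var/www/mysite and the user dimitri are placeholders):

        sudo chown -R dimitri:dimitri /var/www/mysite
        scp -r ./mysite/* dimitri@server:/var/www/mysite/

        After the one-time chown on the server, plain scp from your machine works without sudo.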






        answered Oct 29 '12 at 23:07 by trognanders
























        • I am using this solution anyway, but what if I could get full access to my own file system? I don't want to type sudo chown ... for every single directory :S

          – Dimitris Sapikas
          Oct 30 '12 at 9:47






        • 2





          Changing ownership of all system files to the user for passing convenience is highly discouraged. It allows any userspace bug you might encounter to severely compromise the security of your system. It is much better to change the ownership of the files that you need to change or update by SCP, but to leave everything else owned by root (like it is supposed to be). That said, the -R in chown tells it to change the ownership of that directory, and all children files and directories recursively... so you can do anything you like.

          – trognanders
          Oct 30 '12 at 17:48











        • hmm .... that seems to be working fine, thank you! Sorry, I can't upvote (the system does not allow me to ...)

          – Dimitris Sapikas
          Oct 30 '12 at 19:05
















        8














        Maybe the best way is to use rsync (Cygwin/cwRsync on Windows) over SSH.



        For example, to upload files with owner www-data:



        rsync -a --rsync-path="sudo -u www-data rsync" path_to_local_data/ login@srv01.example.com:/var/www


        In your case, if you need root privileges, the command will look like this:



        rsync -a --rsync-path="sudo rsync" path_to_local_data/ login@srv01.example.com:/var/www


        See: scp to remote server with sudo.
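
        If you want to see what would be transferred into /var/www before writing anything, a dry run along the same lines is a reasonable first step (-n/--dry-run and -v are standard rsync options):

        rsync -avn --rsync-path="sudo rsync" path_to_local_data/ login@srv01.example.com:/var/www

        Drop the -n to perform the transfer for real; as with the commands above, this relies on your remote account being allowed to run rsync through sudo.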






        answered Nov 15 '16 at 10:14 by Alexey Vazhnov (edited Feb 24 '18 by kenorb)






























                5














                If you use the OpenSSH tools instead of PuTTY, you can accomplish this by initiating the scp file transfer on the server with sudo. Make sure you have an sshd daemon running on your local machine. With ssh -R you can give the server a way to contact your machine.



                On your machine:



                ssh -R 11111:localhost:22 REMOTE_USERNAME@SERVERNAME


                In addition to logging you in on the server, this will forward every connection made on the server's port 11111 to your machine's port 22: the port your sshd is listening on.



                On the server, start the file transfer like this:



                cd /var/www/
                sudo scp -P 11111 -r LOCAL_USERNAME@localhost:FOLDERNAME .
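
                Before starting the copy, you can confirm that the reverse tunnel is working; a small check along these lines, run on the server, should print "ok" if port 11111 really reaches your machine's sshd:

                ssh -p 11111 LOCAL_USERNAME@localhost echo ok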





                answered Nov 21 '16 at 16:52 by bergoid




























                        1














                        You may use a script I've written, inspired by this topic:



                        touch /tmp/justtest && scpassudo /tmp/justtest remoteuser@ssh.superserver.com:/tmp/


                        but this requires some unusual setup (which, by the way, the script does automatically):




                        1. the server the file is being sent to will no longer be prompted for a password when it opens an ssh connection back to the source computer

                        2. because there must be no sudo password prompt on the server, sudo on the remote machine will no longer ask the user for a password


                        Here goes the script:



                        (cat << 'ENDOFSCRIPT'
                        interface=wlan0
                        if [[ $# -ge 3 ]]; then interface=$3; fi
                        thisIP=$(ifconfig | grep $interface -b1 | tail -n1 | egrep -o '[0-9.]{4,}' -m1 | head -n 1)
                        thisUser=$(whoami)
                        localFilePath=/tmp/justfortest
                        destIP=192.168.0.2
                        destUser=silesia
                        #dest
                        #destFolderOnRemoteMachine=/opt/glassfish/glassfish/
                        #destFolderOnRemoteMachine=/tmp/

                        if [[ $# -eq 0 ]]; then
                        echo -e "Send file to remote server to locatoin where root permision is needed.ntusage: $0 local_filename [username@](ip|host):(remote_folder/|remote_filename) [optionalInterface=wlan0]"
                        echo -e "Example: nttouch /tmp/justtest &&nt $0 /tmp/justtest remoteuser@ssh.superserver.com:/tmp/ "
                        exit 1
                        fi

                        localFilePath=$1

                        test -e $localFilePath

                        destString=$2
                        usernameAndHost=$(echo $destString | cut -f1 -d':')

                        if [[ "$usernameAndHost" == *"@"* ]]; then
                        destUser=$(echo $usernameAndHost | cut -f1 -d'@')
                        destIP=$(echo $usernameAndHost | cut -f2 -d'@')
                        else
                        destIP=$usernameAndHost
                        destUser=$thisUser
                        fi

                        destFolderOnRemoteMachine=$(echo $destString | cut -f2 -d':')

                        set -e #stop script if there is even single error

                        echo 'First step: we need to be able to execute scp without any user interaction'
                        echo 'generating public key on machine, which will receive file'
                        ssh $destUser@$destIP 'test -e ~/.ssh/id_rsa.pub -a -e ~/.ssh/id_rsa || ssh-keygen -t rsa'
                        echo 'Done'

                        echo 'Second step: download public key from remote machine to this machine so this machine allows remote machine (this one receiveing file) to login without asking for password'

                        key=$(ssh $destUser@$destIP 'cat ~/.ssh/id_rsa.pub')
                        if ! grep "$key" ~/.ssh/authorized_keys; then
                        echo $key >> ~/.ssh/authorized_keys
                        echo 'Added key to authorized hosts'
                        else
                        echo "Key already exists in authorized keys"
                        fi

                        echo "We will want to execute sudo command remotely, which means turning off asking for password"
                        echo 'This can be done by this tutorial http://stackoverflow.com/a/10310407/781312'
                        echo 'This you have to do manually: '
                        echo -e "execute in new terminal: ntssh $destUser:$destIPnPress enter when ready"
                        read
                        echo 'run there sudo visudo'
                        read
                        echo 'change '
                        echo ' %sudo ALL=(ALL:ALL) ALL'
                        echo 'to'
                        echo ' %sudo ALL=(ALL:ALL) NOPASSWD: ALL'
                        echo "After this step you will be done."
                        read

                        listOfFiles=$(ssh $destUser@$destIP "sudo ls -a")

                        if [[ "$listOfFiles" != "" ]]; then
                        echo "Sending by executing command, in fact, receiving, file on remote machine"
                        echo 'Note that this command (due to " instead of '', see man bash | less -p''quotes'') is filled with values from local machine'
                        echo -e "Executing nt""identy=~/.ssh/id_rsa; sudo scp -i $identy $(whoami)@$thisIP:$(readlink -f $localFilePath) $destFolderOnRemoteMachine"" non remote machine"
                        ssh $destUser@$destIP "identy=~/.ssh/id_rsa; sudo scp -i $identy $(whoami)@$thisIP:$(readlink -f $localFilePath) $destFolderOnRemoteMachine"
                        ssh $destUser@$destIP "ls ${destFolderOnRemoteMachine%\\n}/$(basename $localFilePath)"
                        if [[ ! "$?" -eq 0 ]]; then echo "errror in validating"; else echo -e "SUCCESS! Successfully sentnt$localFilePath nto nt$destStringnFind more at http://arzoxadi.tk"; fi
                        else
                        echo "something went wrong with executing sudo on remote host, failure"

                        fi
                        ENDOFSCRIPT
                        ) | sudo tee /usr/bin/scpassudo && chmod +x /usr/bin/scpassudo





                        answered Nov 21 '13 at 19:47 by test30 (edited Nov 22 '13)


























                        • @Braiam yeah, sure, sorry for link, the script is pretty long and that was the reason :)

                          – test30
                          Nov 22 '13 at 1:42
















                        1














                        You can combine ssh, sudo and e.g. tar to transfer files between servers without being able to log in as root and without having permission to access the files with your own user. This is slightly fiddly, so I've written a script to help with it. You can find the script here: https://github.com/sigmunau/sudoscp



                        or here:




                        #! /bin/bash
                        # sudoscp: copy root-owned files between two servers with sudo+tar over ssh
                        res=0
                        from=$1
                        to=$2
                        shift
                        shift
                        files="$@"
                        if test -z "$from" -o -z "$to" -o -z "$files"
                        then
                            echo "Usage: $0 <from> <to> (file)*"
                            echo "example: $0 server1 server2 /usr/bin/myapp"
                            exit 1
                        fi

                        read -s -p "Enter Password: " sudopassword
                        echo ""
                        temp1=$(mktemp)
                        temp2=$(mktemp)
                        (echo "$sudopassword";echo "$sudopassword"|ssh $from sudo -S tar c -P -C / $files 2>$temp1)|ssh $to sudo -S tar x -v -P -C / 2>$temp2
                        # capture the exit status of both sides of the pipeline before any other command resets PIPESTATUS
                        pipe_status=("${PIPESTATUS[@]}")
                        sourceres=${pipe_status[0]}
                        destres=${pipe_status[1]}
                        if [ $destres -ne 0 -o $sourceres -ne 0 ]
                        then
                            echo "Failure!" >&2
                            echo "$from output:" >&2
                            cat $temp1 >&2
                            echo "" >&2
                            echo "$to output:" >&2
                            cat $temp2 >&2
                            res=1
                        fi

                        rm $temp1 $temp2
                        exit $res
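
                        At its core the script is just tar running under sudo on both ends, glued together with ssh. Stripped of the password plumbing, the same technique for this question's upload case looks roughly like this (./sourcedir and user@server are placeholders, and it assumes sudo on the server will not prompt for a password here; otherwise you need the sudo -S handling the script takes care of):

                        tar cf - -C ./sourcedir . | ssh user@server "sudo tar xf - -C /var/www"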































                        • Welcome to Ask Ubuntu. Could you please include the script in your answer? I know it is unlikely but if the github repo was ever removed or the url changed then the answer would be void. It is better to include the script directly and leave the github repo as a source.

                          – Michael Lindman
                          Jul 1 '15 at 15:25


















                        edited Jul 2 '15 at 16:17

























                        answered Jul 1 '15 at 15:16









sigmunda

                        0














                        Here's a modified version of Willie Wheeler's answer that transfers the file(s) via tar but also supports passing a password to sudo on the remote host.



(stty -echo; read passwd; stty echo; echo $passwd; tar -cz foo.*) \
  | ssh remote_host "sudo -S bash -c \"tar -C /var/www/ -xz; echo\""


                        The little bit of extra magic here is the -S option to sudo. From the sudo man page:




                        -S, --stdin
                        Write the prompt to the standard error and read the password from the standard input instead of using the terminal device. The password must be followed by a newline character.
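
As a quick way to see -S on its own, here is a minimal local sketch (not part of the original answer; id is just a harmless command to run through sudo):

# Read the password without echoing it, then hand it to sudo on stdin.
# sudo -S writes its prompt to stderr and reads the password from stdin.
read -s -p "Password: " pw; echo
printf '%s\n' "$pw" | sudo -S id
unset pw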




We want the output of tar piped into ssh, which connects ssh's stdin to tar's stdout rather than to the terminal, so there is no way to type the password to sudo interactively. (We could use sudo's ASKPASS feature on the remote end, but that is another story.) We can still get the password to sudo by capturing it in advance and prepending it to the tar output: both operations run in a subshell, and the subshell's output is piped into ssh. This also has the advantage of not leaving an environment variable containing the password dangling in the interactive shell.



You'll notice I didn't run read with the -p option to print a prompt. That's because sudo's password prompt is conveniently passed back to the stderr of our interactive shell via ssh. You might wonder how sudo can already be prompting when it runs inside ssh on the right-hand side of our pipe. When commands are joined by pipes, the parent shell (here, the interactive shell) starts every command in the pipeline without waiting for the earlier ones to finish, and connects the stdout of each left-hand command to the stdin of the command to its right. Output becomes input as it passes from process to process. We can see this in action by running the whole command, backgrounding the process group (Ctrl+Z) before typing the password, and then viewing the process tree.



$ (stty -echo; read passwd; stty echo; echo $passwd; tar -cz foo.*) | ssh \
  remote_host "sudo -S bash -c \"tar -C /var/www/ -xz; echo\""
[sudo] password for bruce:
[1]+  Stopped    ( stty -echo; read passwd; stty echo; echo $passwd; tar -cz foo.* ) | ssh remote_host "sudo -S bash -c \"tar -C /var/www/ -xz; echo\""

$ pstree -lap $$
bash,7168
  ├─bash,7969
  ├─pstree,7972 -lap 7168
  └─ssh,7970 remote_host sudo -S bash -c "tar -C /var/www/ -xz; echo"


                        Our interactive shell is PID 7168, our subshell is PID 7969 and our ssh process is PID 7970.



The only drawback is that read will accept input before sudo has had time to send back its prompt. On a fast connection to a fast remote host you won't notice this, but you might if either is slow. Any delay does not affect your ability to enter the password; the prompt just might appear after you have started typing.



Note that I simply added a hosts file entry for remote_host on my local machine for this demo.
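
If you do this regularly, the one-liner can be wrapped in a small shell function. The following is only a sketch built on the command above; push_as_root, the host argument and the destination directory are illustrative names, not part of the original answer:

# Usage: push_as_root <host> <dest-dir> <file>...
push_as_root() {
    local host=$1 dest=$2
    shift 2
    # Read the sudo password without echoing it, prepend it to the tar
    # stream, and let sudo -S on the remote side consume it from stdin.
    (stty -echo; read -r passwd; stty echo; echo "$passwd"; tar -cz -- "$@") \
        | ssh "$host" "sudo -S bash -c \"tar -C '$dest' -xz; echo\""
}

Calling push_as_root remote_host /var/www foo.html then prompts once for the remote sudo password and unpacks foo.html into /var/www as root.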






                            edited May 6 '17 at 23:35

























                            answered May 6 '17 at 10:17









Bruce
