7za a -mhe -p archive-name.7z original-file
a = add to archive
-mhe = encrypt headers as well as data
-p = prompt for password
You can also add the switch
-mx0
if you do not wish to do any compressing. For maximum compression, use:
7za a -mx9 -mhe -p archive-name.7z original-file
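For no compression (store only), combine the switches above:
7za a -mx0 -mhe -p archive-name.7z original-file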
apt-get --no-install-recommends install package-name
To upgrade just a single package instead of all available upgrades:
apt-get install --only-upgrade package-name
Discards blocks on a file system, leaving the data that was on the device unrecoverable by software means. This functions as a quick and dirty way to wipe all the data on a block device. Even though the data is not actually overwritten, software analysis of the disk will see only zeroes.
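For example, with blkdiscard from util-linux, assuming /dev/sdX is the target device (double-check the device name before running):
# blkdiscard /dev/sdX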
To return to the directory that you just left:
cd -
Recursively remove executable bit from files, while omitting directories:
chmod -R -x+X target/
The capital "X" sets the executable bit on directories only.
FreeBSD: Set the date to 5:05 pm, January 21, 2018
date 201801211705      # format: yyyymmddhhmm
If you only need to change the hours and minutes
date 1705
will change the time to 5:05 pm and leave the date unchanged.
TOTP codes regenerate every thirty seconds, starting at 0 and 30. To display the current second:
date +%S
You could also run a simple script that returns the number of seconds until the TOTP code is regenerated:
#!/bin/sh
SEC=$( date +%S )
SEC=${SEC#0}    # strip a leading zero so 08 and 09 are not read as octal
TL1=$((30-$SEC))
TL2=$((60-$SEC))
if [ $TL2 -ge 30 ]; then
    expr $TL1
else
    expr $TL2
fi
If you want to get even fancier, the following script will print the seconds in red if you have fewer than ten seconds left before the TOTP code regenerates:
#!/bin/sh
SEC=$( date +%S )
SEC=${SEC#0}    # strip a leading zero so 08 and 09 are not read as octal
TL1=$((30-$SEC))
TL2=$((60-$SEC))
if [ $TL2 -ge 30 ]; then
    TL=$TL1
else
    TL=$TL2
fi
RED="\033[1;31m"
NOCOLOR="\033[0m"
if [ $TL -ge 10 ]; then
    echo $TL
else
    echo "${RED}$TL${NOCOLOR}"
fi
Overwrite with zeroes a 133 byte file:
dd if=/dev/zero of=filename count=1 bs=133
Overwrite with zeroes a 1 MB file:
dd if=/dev/zero of=storage-bin count=1K bs=1024
Overwrite with zeroes a 1 GB file:
dd status=progress if=/dev/zero of=storage-bin count=1024K bs=1024
dd status=progress if=/dev/zero of=storage-bin count=1M bs=1024
On my home system, /dev/zero can be used to generate a 10G file in about 45 seconds. By contrast, /dev/urandom will take about three minutes. Do not even bother with /dev/random.
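For example, to generate a 10G file from /dev/zero (output filename is just a placeholder):
dd status=progress if=/dev/zero of=storage-bin bs=1M count=10K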
Write an ISO image to a USB device (double-check the target device name):
dd if=/path/to/iso-file of=/dev/sdX bs=1M status=progress
To create a 20G file in almost no time, use dd's "seek" argument:
dd if=/dev/zero of=filename bs=1G seek=20 count=0
Check the mx record for yandex.com at the name server dns1.yandex.net:
dig @dns1.yandex.net yandex.com mx
Check all record types in a zone:
dig yandex.com any
Note that an increasing number of authoritative DNS servers reject requests for type any. See bullet point 10 here.
For DNSSEC, to see if a DNSKEY is set at your domain's registry:
dig domain.tld dnskey
To see if your DNS host is validating the DNSKEY:
dig +dnssec domain.tld dnskey
and you should see an RRSIG reply indicating that your DNS host has signed the domain.
On Debian-based systems dig is supplied by the package dnsutils, on FreeBSD by bind-tools.
Displays hardware information. Must be run as root. See this guide.
To display memory information:
dmidecode -t memory
List installed packages:
dpkg --get-selections
Show the status of package, PACKAGE:
dpkg-query --status PACKAGE
Show the disk usage of the current directory and each of its first-level subdirectories:
du -h --max-depth=1
Backup files in directory “source” to a remote server. The first time duplicity runs it will do a full backup. Subsequently, it will do an incremental backup of changes.
duplicity --encrypt-key gpg-key /home/user/source sftp://host//home/user/target
duplicity --encrypt-key gpg-key /home/user/source file:///home/user/local-target
duplicity restore sftp://host//home/user/backup /home/user/local-restore-directory
On incremental backups, some versions of duplicity will return the following error message related to GnuPG:
Error processing remote manifest
This is a known and benign error message that does not indicate any failures in the backup.
If you enable one of the color modes, then [shift]-5 will cycle through the color schemes for that mode.
You can toggle the numbering of hyperlinks with the period "."
My Emacs notes have their own page.
fallocate: Preallocate or deallocate space to a file
This command can be used to create large files faster than dd. To create an empty 1 MB file:
fallocate -l 1M filename
The -l switch specifies the size. K=kilobytes. M=Megabytes. G=Gigabytes. The default is bytes. More specifically, M = 1024*1024 bytes but MB = 1000*1000.
A much more user-friendly version of the traditional "find" command. Debian has renamed the upstream binary from "fd" to "fdfind" but this change is not mentioned in the Debian man page, which is still located at "man fd".
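For example (the pattern and path here are just placeholders):
fdfind pattern                        # search the current directory tree
fdfind -e txt pattern ~/Documents     # limit the search to .txt files under ~/Documents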
Create a local directory for git repositories. Then, in that directory, retrieve the remote repository that you wish to work on locally:
git clone git@github.com:oldfolio/notes3e.git
Note that the above command presupposes that you have added an SSH key to your GitHub account. Over time, your local folder can grow quite large with the record of changes that git keeps in the .git directory. One solution is to run the above command in a new folder and use the new smaller folder as your working directory.
You may also want to set the following in your global configuration:
git config --global pull.rebase false
git config --global core.excludesFile ~/.gitignore
A good initial ~/.gitignore file might include:
.DS_Store
Edit locally whatever files you wish to change. To update the remote repository:
git diff          (optional, to see changes)
git add -u        # This adds all files that have been updated.
git add .         # This adds all files in the current directory, i.e.
                  # untracked files will become tracked files.
git commit        (or git commit -am "Update message")
git push
Add a new file:
git add FILENAME
To clean up your clone so that it matches the origin, you can run:
git fetch origin
Completely discard local changes and set your clone to match the origin master:
git fetch
git reset --hard HEAD
git merge origin/master
Resetting your clone makes it match the origin, discarding your local differences.
Keep local changes, but update your repo before pushing them:
git fetch
git stash
git merge origin/master
git stash pop
Stashing creates a commit that is not visible to the branch's tracking. You then merge the origin with your local clone. Finally, you retrieve the stashed commit, discarding the stash in the process.
If you create a new repository locally and want to push it to Github, you would:
$ cd LOCAL_REPO_DIRECTORY
$ git remote add origin git@github.com:oldfolio/REPO_NAME.git
$ git push origin master
The above steps do the reverse of what cloning a Github repository to your local machine does (as described in the cloning section above).
You do not need remotes to use git to track changes. You can use git to manage a directory without setting up a remote if you do not need the files synchronized to other devices. The git workflow is the same, except that there is no pushing or pulling. If you want to set up a remote later, you can do that. Simply create an empty bare repo, go back to the original directory, and run:
git remote add origin URL-TO-BARE-REPO
Then, still in the original directory, run:
git push --set-upstream origin master
To host a static site at Github pages, create a repository for the site. In the root directory for the site, place a text file named CNAME. The content of the CNAME file should simply be the domain name you wish to use for the site, e.g. notes.oldfolio.org. Then create a CNAME record at your domain’s DNS host that points to USERNAME.github.io:
notes 300 IN CNAME oldfolio.github.io.
You can then check the Enforce HTTPS option in your repository’s settings.
To check the status of your repository:
git status
When you are away from your local folder you can still edit your site by logging into Github and editing files there. You would just need to remember to pull those changes into your local folder with
git pull [origin master]
Create the host repository:
git init --bare repository-name.git
Then, you will need to clone that repository on whatever machines you wish to use when working in that repository:
git clone username@owl:git/repository-name.git
where "owl" is the ~/.ssh/config "Host" and "git/repository-name.git" is the path to the main repository hosting home, relative to the git-hosting user's home directory.
To view a summary of changes to a file:
git log relative-path-to/file
To view the most recent changes to a file:
git show --pretty=medium relative-path-to/file
Unrelated Histories
If you commit two unrelated files in different clones, git will accept the first one you push but reject the second with the message "refusing to merge unrelated histories". You can get around this by running
git merge --allow-unrelated-histories
in the second clone. You can then push the second file. Of course, you also then need to pull that change back in the first clone.
To edit the message of the most recent commit, even one that has already been pushed:
git rebase -i HEAD~1
This will bring up an editor window. The top line will contain an instruction to git; the default is pick. You will likely want to change that to r or reword. Once the proper command is specified, save the buffer and exit the editor. This will bring up another editor window allowing you to edit the commit message. After you have finished editing the commit message, save it. The message is now changed locally; you still need to push it to the remote. Because the associated commit has already been pushed, you will need to force the push:
git push --force
To restore an old version of a file:
git checkout commit-id-number relative-path-to/file
git commit -am "commit message"
git push
The checkout command first reads the file into the index, then copies it into the working tree, so there's no need to use git add to add it to the index in preparation for committing.
For file conflict resolution, see the Pro Git Book.
If you have not yet added a file, you can simply delete it. Until you add a file, git is not paying attention to that file. Once you do add a file, the file is staged but not committed. You can unstage a file with:
git restore --staged filename
At that point, you can simply delete the file if you no longer want it. No harm, no foul.
Searching a Git Repository
git grep -n search-term
will return every instance of search-term in the form of:
file-name:line-number:The text of the line in which search-term appears.
If you inadvertently delete your git origin repository, you can re-create it from one of your clones.
Create a new bare repository in the directory you use for origin repositories. Then, enter the directory in which the clone lives and set the URL for its origin repository to the new repo you just created:
$ git remote set-url origin mm@hp1:git/new-repo
Then push your clone to the new origin:
$ git push
List remotes:
$ git remote -v
You can adjust how much git compresses its packed files on a numerical scale from -1 to 9, with 9 being the greatest amount of compression. The default is -1. To adjust the compression level for a specific repository, run the following command from within the repository:
git config core.compression 4
To set a compression level globally, run:
git config --global core.compression 4
This last command will add a
compression = 4
line to the [core] section of your ~/.gitconfig file.
After adjusting the compression level, you should repack the repository:
git repack -F -d
The -F switch tells git to apply the new compression level to old files and not just to ones added after the compression level was changed. The -d switch tells git to delete any files that are no longer needed once the repacking has finished.
Generate a new elliptic curve (ECC) key pair instead of an older RSA key pair:
gpg --expert --pinentry-mode=loopback --full-gen-key
In the above example --pinentry-mode=loopback is used because I am usually working in a console environment. If you are working in a graphical environment, you probably do not need it.
Simple symmetric file encryption:
gpg -c --cipher-algo blowfish filename.txt
Encrypt to a specific user/recipient:
gpg -e -r USER file.txt
Create a detached, ASCII-armored signature specifying which key to use:
gpg -u key-to-use -a --output file.sig --detach-sig file.txt
Create a non-detached, ASCII-armored signature specifying which key to use:
gpg -u key-to-use --clearsign file.txt
Verify detached signature:
gpg --verify signature.sig signed-file.txt
Export public key:
gpg -a --export {key-identifier} > public-key.asc
Export secret/private key:
gpg -a --export-secret-keys {key-identifier} > secret-key.asc
If you should ever need to edit your ~/.gnupg/gpg-agent.conf file, you will need to reload the gpg-agent once you are finished editing.
$ gpg-connect-agent reloadagent /bye
Use extreme caution if you change the gpg-agent to pinentry-curses. Doing so breaks the graphical version of Emacs, and I have not yet found a work-around. If you will be working remotely with GnuPG encrypted files, you may need to set the agent to pinentry-curses. (See the dot file above.) Otherwise, the gpg-agent will expect a graphical environment -- and fail when one is not present.
UPDATE: You can use GPG2 in NON-GRAPHICAL ENVIRONMENTS without the annoyances of the graphical gpg-agent. You need to edit two files:
~/.gnupg/gpg-agent.conf
~/.gnupg/gpg.conf
Add the line
allow-loopback-pinentry
to ~/.gnupg/gpg-agent.conf and add the line
pinentry-mode loopback
to ~/.gnupg/gpg.conf.
Another way to implement GPG2 in NON-GRAPHICAL ENVIRONMENTS, specifically for use with Emacs, is to add the following lines to your ~/.gnupg/gpg-agent.conf file:
allow-emacs-pinentry
allow-loopback-pinentry
Remember to run:
$ gpg-connect-agent reloadagent /bye
Then, add the following line to your ~/.emacs.d/init.el file, preferably near the other "epa" lines in order to keep all the gnupg config lines together:
(setq epa-pinentry-mode 'loopback)
To search for all files in a directory structure containing a string of text, either of the following commands will work:
$ grep -rni ./ -e 'text to search for'
$ grep -rni "text to search for" ./*

-r = recursive (-R in OpenBSD)
-n = list line number where text is found
-i = case insensitive search
Note that this searches file contents but not file names. You can add the -w switch if you want the string of text treated as a whole word. For instance, a search with -w for 'text' will not return instances of 'texts'. A search without the -w switch will return 'texts', 'textual', etc.
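For example:
grep -rniw 'text to search for' ./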
OpenBSD's version of grep does not support the --color switch. In fact, OpenBSD's version of grep does not support any colorized option. Also, the recursive switch is capital R in OpenBSD, not lower case r as it is in Linux.
&amp; will display &
&lt; will display <
&gt; will display >
You might also find this useful:
<p><a href=""></a></p>
<p><a href="" rel="nofollow" target="_blank"></a></p>
Be sure not to overlook the <q> </q> tag, which adds curly quotation marks, as demonstrated here.
If the systemd journal is growing too large, you can reduce the space used in the following way:
journalctl --rotate journalctl --vacuum-size=100M
The rotate flag archives all the currently active journal files, and the vacuum-size flag removes all but the most recent 100M of archived journal files.
Create a symbolic link:
ln -s target-file link-name
# losetup -a                     # List the status of all loop devices
# losetup /dev/loop0 filename    # Associate loop device 0 with file filename
# losetup -d /dev/loop0          # Detach loop device
Send 128 queries to only the nameservers specified:
namebench -q 128 -O 208.67.222.222, 1.1.1.1, 8.8.8.8
Some nethack commands:
@ = toggle autopickup
d = drop
i = open inventory
r = read (as in read a spellbook)
t = throw (as in throw a dagger)
w = wield weapon
f = fire arrows in quiver using wielded bow
Q = place arrows in quiver
S = save your game and exit
P = put on (as in put on a ring)
R = remove (as in remove a ring)
W = wear armor or shield
T = take off armor or shield
Z = cast a spell
^d = bash (as in bash a door)
#chat = talk to another character
#loot = open a container
#force = attempt to open a locked container
#untrap = rescue pet from pit
Possible ~/.nethackrc
OPTIONS=color,time,hilite_pet,menucolors,!autopickup,role=valkyrie,race=human
#OPTIONS=color,time,role=wizard,race=elf,gender=female
To see which TCP ports are open on your server:
netstat -ant
See, also, ss below.
Nextcloud's command line occ script is found in Nextcloud's root directory. It needs to be run as the web server user. To get a list of basic command options, run:
sudo -u www-data php --define apc.enable_cli=1 occ
Because Nextcloud does not monitor changes to the underlying file system, if you copy files directly to a user's "files" directory, Nextcloud will not recognize those files as being present. To get Nextcloud to recognize the copied files, you need to use the occ tool to re-scan the user's files:
sudo -u www-data php --define apc.enable_cli=1 occ files:scan USERNAME
You can use openssl for simple file encryption:
openssl enc -blowfish -a -iter 12 -in filename.txt -out filename.enc
To decrypt the output file from the above example:
openssl enc -d -blowfish -a -iter 12 -in filename.enc -out filename.txt
For decryption, notice the addition of the -d switch and the reversal of the input and output filenames. Also, notice that all of the other options are included. Omitting any of those options will yield a failure to decrypt.
Some ciphers that you can use here.
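On recent versions of OpenSSL, you can list the ciphers supported by your local build with:
openssl enc -list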
Install under Debian:
apt-get install pass-extension-otp
The above command will install the OTP extension as well as the base password-store utility.
Specify password-store directory in ~/.profile or ~/.mkshrc, etc.
PASSWORD_STORE_DIR=/path/to/directory
export PASSWORD_STORE_DIR
Create a new password-store database:
pass init [email-address-associated-with-GPG-key]
Enter a new account in the password-store:
pass insert -m Account-Name
pass insert -m Folder/Account-Name
Edit the information for an account that already exists in the password-store:
pass edit Account-Name
Show account information:
pass show Account-Name
Add a TOTP secret key to an account:
pass otp append Account-Name
When prompted enter a key URI of the form:
otpauth://totp/acct-name?secret=SECRET-KEY
You could also just add the above URI string to the password-store entry using the pass edit Account-Name command.
Print the current TOTP code:
pass otp code Account-Name
Remove an entry from the password-store:
pass rm Account-Name
pass rm -r Folder      [to delete an entire folder]
One of the values returned by ping is ttl. The ttl value can help to identify the pinged system's OS.
OS        ttl
Linux      64
MacOS      64
OpenBSD   255
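The ttl appears in each reply line of an ordinary ping, for example:
ping -c 4 hostname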
Use rclone to synchronize local files/folders with a Backblaze B2 bucket. On your home PC you should also install the Debian backblaze-b2 utility in order to manage your Backblaze buckets and account.
$ b2 create_bucket File-Cabinet-Master allPrivate
$ rclone config        # to set up or edit the configuration of remote storage
$ rclone --progress sync /home/mm/File-Cabinet-Master b2_cabinet:File-Cabinet-Master
$ rclone --progress sync b2_cabinet:File-Cabinet-Master scw_cabinet:file-cabinet
$ rclone size b2_cabinet:File-Cabinet-Master
When synchronizing to an S3 bucket, you may want to add the --size-only flag in order to reduce the number of requests to the remote server.
rclone sync --progress --size-only /home/mm/File-Cabinet-Master scw_cabinet:file-cabinet
In addition to commercial remote services, you can also use rclone to synchronize over sftp to one of your own servers.
rclone --progress sync /home/mm/File-Cabinet-Master cedar_ssh:/home/mm/File-Cabinet-Master
When you set up a Backblaze B2 account as an rclone remote resource, you will need to use an application key.
The above set of instructions allows you to synchronize using a local directory as the source and a B2 bucket as the destination. If you wish to reverse that and use the B2 bucket as the source and a local directory as the destination, then use the b2 tool:
$ b2 sync --dryRun --threads 1 b2://File-Cabinet-Master/ /home/mm/File-Cabinet-Master
The default number of threads is 10. I use only one to avoid annoying others in my household who are also using the network.
Copy a single file to a target directory:
rclone copy FILENAME remote:directory/
Notice the trailing slash following the target directory.
If you wish to rename a file when you copy it, then you would use the copyto command:
rclone copyto FILENAME remote:directory/NEW-FILENAME
To mount a remote resource onto your filesystem:
rclone mount --daemon --vfs-cache-mode full remote: /local/mount/directory
You need the --vfs-cache-mode full in order to have full read-write access to the remote resource. The --daemon mode is needed under MacOS. To unmount under MacOS:
$ umount /Path/to/mount-point
To check if the source and the destination are the same without making any changes in either, run:
rclone check [--size-only --fast-list] source/directory scw_cabinet:
The --fast-list flag reduces the number of transactions in the request. This can resolve some 403 Rate Limit errors with Google Drive.
If you wish to exclude a directory (such as, /source-directory/ignore-this-directory/) from a sync operation, you would use a construction like the following:
rclone sync -P --exclude "/ignore-this-directory/**" /source-directory remote:target-directory
You need the double asterisk following the ignored directory in order to ignore subdirectories as well as the primary ignored directory. A single asterisk ignores all the files in the ignored directory but does not ignore subdirectories.
The remind package that ships with Debian Bookworm exhibits some strange behavior. The process continues to run in the background, preventing me from logging out of my SSH sessions. So, whenever I start remind, I need to kill the process before exiting the SSH session, or the session will hang, leaving more than just the remind process in a state of limbo.
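One way to do that, assuming you have no other remind processes you want to keep running:
pkill -u $USER remind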
rsync -avuP --delete source-directory/ host:/destination-directory
Notice that the source directory HAS a trailing slash, but that the destination directory does NOT have a trailing slash.
Hetzner storage boxes only recognize relative paths. So, your rsync command will need to look something like:
rsync -avuP --delete local-directory/ hetzner:./directory

Notice the dot before the remote directory path.
Synchronize a single file:
rsync -avuP source-directory/filename host:/destination-directory/
Notice that when synchronizing a single file a trailing slash *DOES* follow the destination directory.
If you wish to exclude a directory (such as, /source-directory/ignore-this-directory/) from a sync operation, you would use a construction like the following:
rsync -avuP --delete --exclude 'ignore-this-directory' /source-directory/ remote:/target-directory
If you need to preserve hard links, then you should use the
-H
switch.
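For example, combined with the switches above:
rsync -avuPH --delete source-directory/ host:/destination-directory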
Overwrite with random data and delete all files and subdirectories of DIRECTORY
srm -llr DIRECTORY
Overwrite with zeroes and delete all files and subdirectories of DIRECTORY
srm -llzr DIRECTORY
Under OpenBSD, the standard rm command followed by the -P switch overwrites files once with random data before deleting. Add the -R switch to remove the entire file hierarchy, including subdirectories.
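For example:
rm -P filename
rm -PR directory-name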
Shred is fairly straightforward for operating on individual files or groups of files in the same directory.
shred -n0 -z -u filename
shred -n0 -z -u files*
Where shred gets tricky is when you need to delete all the files in a complex directory hierarchy. The way to do that is to combine shred with the find utility.
find DIRECTORY-NAME -type f -exec shred -n0 -zu {} \;
You can add the -v switch for more verbose output.
Report chrome or chromium total memory usage:
smem -t -k -c pss -P chrom | tail -n 1
Report dropbox total memory usage:
smem -t -k -c pss -P dropb | tail -n 1
Report firefox total memory usage:
smem -t -k -c pss -P firef | tail -n 1
Report opera total memory usage:
smem -t -k -c pss -P opera | tail -n 1
Report yandex browser total memory usage:
smem -t -k -c pss -P yandex_b | tail -n 1
Report yandex disk total memory usage:
smem -t -k -c pss -P yandex-d | tail -n 1
Report vivaldi total memory usage:
smem -t -k -c pss -P vivaldi | tail -n 1
The ss command is a successor to netstat. (See netstat above.) As long as netstat is available it is still a useful tool.
ss -at
Creating an SSH tunnel:
ssh -D 5222 remote-server -N

-D = bind port
-N = do not execute a remote command
To use with the Chromium browser:
chromium --proxy-server=socks5://localhost:5222
To use with Firefox, Pale Moon, etc.:
Preferences -> Advanced -> Network -> Connection -> Settings
Manual proxy configuration
SOCKS Host: 127.0.0.1   Port: 5222
No Proxy for: localhost, 127.0.0.1
If you install sshfs, you can mount your remote servers as an ordinary user. Use the mount options uid and gid so that the remote directory will belong to the local user.
$ sshfs server-nickname:/home/username /local/mountpoint -o uid=1000,gid=1000
To unmount:
$ fusermount -u /local/mountpoint
Report hardware information on FreeBSD systems:
# sysctl hw.model hw.machine hw.ncpu
To archive your /etc and /home directories:
# tar cvf /root/etc-home.tar /etc /home
To create an archive that excludes some files in the target:
tar cvf ~/archive.tar --exclude='excluded-directory/*' *
To list the files in an archive:
tar tvf archive.tar
The output of listing an archive's contents will look something like:
-rw-r--r-- user/group 1567 2019-12-12 10:50 ./file1.txt
-rw-r--r-- user/group 1997 2019-12-12 10:50 ./file2.txt
You can remove an unwanted file from an archive in the following way:
tar --delete -f archive.tar ./file2.txt
where file2.txt is the unwanted file. You should note though that the --delete switch will not work on compressed archives.
Compression:
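tar's built-in compression switches are z (gzip), j (bzip2), and J (xz), for example:
tar czvf archive.tar.gz target-directory/
tar cjvf archive.tar.bz2 target-directory/
tar cJvf archive.tar.xz target-directory/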
Create an archive with a time stamp in the archive name:
suffix=`date +%F-%H.%M`
tar cvf /home/user/archive-$suffix.tar /path/to/target-directory/
Create an empty archive:
tar -cvf archive.tar -T /dev/null
Ctrl-b to enter commands
Detach the current session:
Ctrl-b d
Re-attach a previous session:
tmux attach -t 0
where "0" is the name of the previous session.
Generate a string of characters based on a defined input. You can use this to generate passwords. For example, the following will generate a 16 character string drawn from capital letters, lower case letters, and numbers:
tr -dc A-Za-z0-9 </dev/urandom | head -c 16; echo
If you want to add some special characters into the mix:
tr -dc 'A-Za-z0-9!$&[]^%+=$' </dev/urandom | head -c 16; echo
To change a user's primary login group:
usermod -g primarygroupname username
To add a user to a secondary group:
usermod -a -G secondarygroupname username
Using the -G switch without the -a switch will remove a user from all secondary groups except those specified by the current instance of the -G switch.
Change a user's username:
usermod --login new-user-name --move-home --home /home/new-home-directory old-user-name
When you change a user's username you will likely also want to change the name of the user's primary group:
groupmod --new-name new-group-name old-group-name
In most cases, the new-group-name will be the same as the new-user-name, and the old-group-name will be the same as the old-user-name.
Find each occurrence of 'foo' and replace it with 'bar':
:%s/foo/bar/g
When you need vim to behave like traditional vi:
vim -u NONE -C
The -u switch specifies which vimrc file to use, with the NONE argument instructing vim not to load any vimrc initializations. The -C switch instructs vim to behave in a way that is compatible with traditional vi. The -C switch by itself does not work because without "-u NONE" vim will respect your vimrc initializations.
Edit a remote file:
vim scp://user@server.com:22//home/user/filename
or
:e scp://user@server.com:22//home/user/filename
# OR
:e scp://SSH-Config-Host//home/user/filename
# OR
:e scp://SSH-Config-Host/filename    # /home/user not needed because you
                                     # are automatically logged into that
                                     # directory
Browse a remote directory:
:e scp://user@server.com:22//home/user/
:e scp://SSH-Config-Host//     # Directory specification not needed if
                               # you wish to browse the directory you are
                               # initially logged into.
Alternative Remote Construction
:e sftp://user@server//path/to/file
Prompt for an encryption key:
:X
Center text [based on a 75 character-wide line]:
:ce [75]
Set the maximum number of characters on a line to 75
set tw=75
Various editing tasks:
dd    delete current line
~     switch case of characters (from CAPITALS to lower case or vice VERSA)
U     MAKE ALL SELECTED CHARACTERS CAPITALS/UPPER CASE
u     make all selected characters lower case
J     join next line to the current one
>     indent selected lines
gq    apply text formatting to selected region
"     specify a register
"+    specify the clipboard
"+y   copy to clipboard
"+d   cut to clipboard
"+P   paste from clipboard before cursor
"+p   paste from clipboard after cursor
To use vim scripts, install the Debian package vim-scripts. This will install many of the most useful scripts to the
/usr/share/vim-scripts
directory. Then, create a ~/.vim/plugin directory. Inside that directory, create a symbolic link to the script you wish to use.
ln -s /usr/share/vim-scripts/gnupg gnupg
Install package davfs2.
Add user to group davfs2:
usermod -a -G davfs2 username
Create authentication file /home/user/.davfs2/secrets. Remove any "group" or "other" permissions:
chmod go-r /home/user/.davfs2/secrets
The "secrets" file should contain the line:
/home/user/mount-point dav-user@server [password]
Create entry in /etc/fstab:
https://server.tld /home/user/mount-point davfs noauto,user,uid=1000,gid=1000 0 0
Now, the ordinary user should be able to mount the remote storage with the command:
mount /home/user/mount-point
echo 3 > /proc/sys/vm/drop_caches
Do a search on "drop_caches" for additional information, including the differences between echo 1, echo 2, and echo 3.
I keep all of my notes in a very large text file. Here is a size comparison of the notes in different formats (October 2018):
RAW TEXT   2708583 bytes   (100.00%)
DOCX       1254433 bytes   ( 46.31%)
ODT        1103976 bytes   ( 40.76%)
GZ TEXT    1040337 bytes   ( 38.41%)
XZ TEXT     796340 bytes   ( 29.40%)
BZ2 TEXT    763485 bytes   ( 28.19%)
PSS as reported by smem under my test scenario, January 19, 2021:
Chromium:  360M
Firefox:   520M
Palemoon:  180M
Vivaldi:   210M
Memory usage reported by htop for different shells:
FREEBSD 11.3
bash    7840   3956
csh     7412   3800
ksh93   8232   4196
mksh    6608   2692
sh      7068   3064
tcsh    7412   3804
DEBIAN 10.1
bash    7599   4236
dash    2388    700
lksh     616    356
mksh    3164   2144   (non-static)
mksh     848    580   (static)
tcsh    6656   3288
Average ping speed from home to servers December 2020.
almond:   26.698
birch:   155.023
cedar:    45.158
elm:      47.287
fir:      27.191
larch:    45.240
teak:     65.452
SD:       48.601
Average ping speed from home to servers July 2020.
birch:   153.272
cedar:    71.759
fir:      34.283
larch:    92.014
pine:    145.663
SD:       44.081