azlux / log2ram
ramlog like for systemd (put logs into a RAM folder)
License: MIT License
In uninstall.sh there is a typo at line 9:
rm etc/log2ram.conf
The leading / is missing!
Hi, I tried to check the service. Apparently it is running:
By the way, the folder is hdd.log, not log.hdd as stated in the readme.
During setup, uninstall.sh should be copied to the target system as well. The downloaded and extracted archive could then be removed. That way you always know where to find the uninstaller and cannot lose track of it.
The target could be one of these:
/usr/local/bin/log2ram-uninstaller.sh
/usr/local/bin/uninstall-log2ram.sh
Also, the uninstaller script could perhaps be installed without execute permission, to prevent uninstalling by accident.
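For illustration, install(1) can set the mode at copy time. A minimal sketch, demonstrated against temporary paths so it runs unprivileged; the target filename is just the suggestion above, not what install.sh actually does:

```shell
# Hypothetical demo: place the uninstaller without execute permission.
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho "uninstalling..."\n' > /tmp/demo-uninstall.sh

# In install.sh this would target /usr/local/bin/uninstall-log2ram.sh:
install -m 644 /tmp/demo-uninstall.sh /tmp/demo-bin/uninstall-log2ram.sh

ls -l /tmp/demo-bin/uninstall-log2ram.sh   # mode shows as -rw-r--r--
```

The user would then need an explicit chmod (or sh script) to run it, which is exactly the accident-prevention suggested above.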
The Readme could then be updated like this:
curl -Lo log2ram.tar.gz https://github.com/azlux/log2ram/archive/master.tar.gz
tar xf log2ram.tar.gz
chmod +x log2ram-master/install.sh && sudo log2ram-master/install.sh
rm -r log2ram-master
…
(because sometimes we need it)
sudo chmod +x /usr/local/bin/uninstall-log2ram.sh && sudo /usr/local/bin/uninstall-log2ram.sh
RequiresMountsFor=/var/log /var/log.hdd
should be changed to the correct path, /var/hdd.log.
For some use cases, ZRAM looks like a good alternative to a TMPFS mount point.
We can store more data in a small part of the RAM.
Not sure if it's a good idea.
If I add this, it will be an option (disabled by default).
Feel free to answer this ticket to give me your opinion / ideas. Or even better, make a pull request.
Az
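For anyone wanting to experiment with the idea, a rough sketch of what a zram-backed log mount could look like. This must run as root; the device number, size, compression algorithm, and mount point are all assumptions for illustration, not log2ram settings:

```shell
# Hypothetical sketch (needs root and the zram kernel module):
modprobe zram num_devices=1
echo lz4 > /sys/block/zram0/comp_algorithm   # choose the compressor
echo 30M > /sys/block/zram0/disksize         # uncompressed device size
mke2fs -t ext4 /dev/zram0                    # put a small filesystem on it
mkdir -p /mnt/ramlog                         # mount point is an assumption
mount /dev/zram0 /mnt/ramlog
```

The appeal is that log text compresses well, so the real RAM cost of the device is typically a fraction of its advertised disksize.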
Hi.
I have installed log2ram according to instructions given.
I am running mosquitto and it is logging to /var/log/mosquitto/mosquitto.log
every 8-10 minutes, instead of every hour, even though I changed the cron job to daily: sudo mv /etc/cron.hourly/log2ram /etc/cron.daily/log2ram
Just a couple of lines, but I suggest adding the following to the installer; installation can often bail out with just a failure, and the following would greatly minimise that.
# Remove a previous log2ram version
# ??
# rm -rf /var/log.hdd
# Make sure we start clean
rm -rf /var/hdd.log
# Make backup of pruned logs
mkdir -p /var/oldlog
cp -rfup /var/log/*.1 /var/oldlog/
cp -rfup /var/log/*.gz /var/oldlog/
cp -rfup /var/log/*.old /var/oldlog/
# Prune logs
rm -r /var/log/*.1
rm -r /var/log/*.gz
rm -r /var/log/*.old
# Clone /var/log
mkdir -p /var/hdd.log
mkdir -p /var/log/oldlog
cp -rfup /var/log/ -T /var/hdd.log/
sed -i '/^include.*/i olddir /var/log/oldlog' /etc/logrotate.conf
echo "##### Reboot to activate log2ram #####"
echo "##### edit /etc/log2ram.conf to configure options #####"
So that no logs are lost, it first creates a dir:
mkdir -p /var/oldlog
then copies the old, redundant logs:
cp -rfup /var/log/*.1 /var/oldlog/
cp -rfup /var/log/*.gz /var/oldlog/
cp -rfup /var/log/*.old /var/oldlog/
Then it prunes and cleans, so the initial /var/log and /var/hdd.log contain only clean live logs:
rm -r /var/log/*.1
rm -r /var/log/*.gz
rm -r /var/log/*.old
Create a clean /var/hdd.log and copy the pruned /var/log/ into it, so that applications/services expecting existing log files will not fail (the bind mount provides only the directory tree):
mkdir -p /var/hdd.log
mkdir -p /var/log/oldlog
cp -rfup /var/log/ -T /var/hdd.log/
sed -i '/^include.*/i olddir /var/log/oldlog' /etc/logrotate.conf
Also add a single global logrotate directive, olddir /var/log/oldlog, to /etc/logrotate.conf.
Currently running /etc/log2ram.conf
SIZE=10M
ZL2R=true
COMP_ALG=lz4
LOG_DISK_SIZE=30M
PRUNE_LEVEL=18M
pi@raspberrypi:~ $ df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/root 14422 4933 8840 36% /
devtmpfs 213 0 213 0% /dev
tmpfs 217 0 217 0% /dev/shm
tmpfs 217 4 214 2% /run
tmpfs 5 1 5 1% /run/lock
tmpfs 217 0 217 0% /sys/fs/cgroup
/dev/mmcblk0p1 44 22 22 51% /boot
/dev/zram0 26 5 19 20% /var/log
tmpfs 44 0 44 0% /run/user/1000
pi@raspberrypi:~ $ zramctl
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4 30M 8.7M 1.9M 2.3M 1 /var/log
/dev/zram1 lz4 650.2M 4K 64B 4K 1 [SWAP]
pi@raspberrypi:~ $ free -m
total used free shared buff/cache available
Mem: 433 111 108 4 213 269
Swap: 750 0 750
I've only just realised that log2ram is causing issues. Despite using a recent version, I discovered at some point that the Mosquitto service was not starting up properly: I could run Mosquitto normally, but not as a service. In the process I then discovered that the Apache log folder had disappeared, and of course without it Apache won't start.
I recreated the Apache log folder, took out log2ram, rebooted, and now everything is working fine. I really do need log2ram to work to reduce writes to the SD card, but it is causing these issues; or, probably more correctly, the interaction between Apache and log2ram, and between Mosquitto and log2ram, is causing problems.
Is it possible to adapt log2ram to the init.d system used on Gentoo Linux instead of systemd?
pi@RasPiHole:~ $ `cd log2ram-master`
pi@RasPiHole:~/log2ram-master $ `systemctl status log2ram.service`
● log2ram.service - Log2Ram
Loaded: loaded (/etc/systemd/system/log2ram.service; enabled; vendor preset: enabled)
Active: failed (Result: timeout) since Sun 2019-02-10 14:01:29 CET; 3min 23s ago
Process: 217 ExecStart=/usr/local/bin/log2ram start (code=killed, signal=TERM)
Main PID: 217 (code=killed, signal=TERM)
Feb 10 14:01:29 RasPiHole log2ram[217]: [58B blob data]
Feb 10 14:01:29 RasPiHole log2ram[217]: [58B blob data]
Feb 10 14:01:29 RasPiHole log2ram[217]: [59B blob data]
Feb 10 14:01:29 RasPiHole log2ram[217]: mount: wrong fs type, bad option, bad superblock on log2ram,
Feb 10 14:01:29 RasPiHole log2ram[217]: missing codepage or helper program, or other error
Feb 10 14:01:29 RasPiHole log2ram[217]: In some cases useful info is found in syslog - try
Feb 10 14:01:29 RasPiHole log2ram[217]: dmesg | tail or so.
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
/log2ram-master $ `dmesg | tail`
[ 125.479972] systemd[1]: log2ram.service: Main process exited, code=killed, status=15/TERM
[ 125.480821] systemd[1]: Failed to start Log2Ram.
[ 125.481205] systemd[1]: log2ram.service: Unit entered failed state.
[ 125.481248] systemd[1]: log2ram.service: Failed with result 'timeout'.
[ 125.486038] systemd[1]: Starting Journal Service...
[ 125.594830] systemd[1]: Started Journal Service.
[ 125.643757] systemd-journald[2469]: Received request to flush runtime journal from PID 1
[ 126.593789] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup
[ 126.621515] Adding 102396k swap on /var/swap. Priority:-2 extents:1 across:102396k SSFS
[ 128.228864] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0xC5E1
You can use an overlayFS.
Just create a directory and mount the zram device on it:
mkdir -p /opt/zram$RAM_DEV
mount --verbose --types ext4 -o nosuid,noexec,nodev /dev/zram$RAM_DEV /opt/zram$RAM_DEV/
I am using zram and just create a dir with the same name as the dev and mount there:
mkdir -p /opt/zram$RAM_DEV/upper /opt/zram$RAM_DEV/workdir $ZRAM_DIR
$ZRAM_DIR would be /var/log; just make the upper and workdir in zram, and create /var/log after it has been bind mounted.
Then create your OverlayFS mount, which removes any need for syncFromDisk, since you have an OverlayFS with the zram upper on top of the bind-mounted lower:
mount -t overlay -o lowerdir=$BIND_DIR,upperdir=/opt/zram$RAM_DEV/upper,workdir=/opt/zram$RAM_DEV/workdir overlay $ZRAM_DIR
The code is from https://github.com/StuartIanNaylor/zram-config/tree/OverlayFS where I have a ztab creating all zram dirs / swaps, and I also use OverlayFS to remove the need for syncFromDisk, which might cause delay on large directories.
syncToDisk only writes updated file changes, so it should write much less; it is still in operation.
Because of your wait function and various settings such as IgnoreOnIsolate=yes, the systemd unit is very unfriendly to others, as it seems to fail to signal completion.
I wanted to let log2ram start first and do its work before the logs and everything else, but even with an After=log2ram.target it still seemed to tangle with your mount delay.
I just threw in a load of After= targets that log2ram had as Before= targets, so it starts further down the boot order.
Which is no problem, but yeah, the service seems very unfriendly.
[Unit]
Description=Log2Ram
DefaultDependencies=no
Before=basic.target rsyslog.service syslog.target systemd-journald.service sysinit.target shutdown.target apache2.service
After=local-fs.target
Conflicts=shutdown.target reboot.target halt.target
RequiresMountsFor=/var/log /var/hdd.log
IgnoreOnIsolate=yes
[Service]
Type=oneshot
ExecStart= /usr/local/bin/log2ram start
ExecStop= /usr/local/bin/log2ram stop
ExecReload= /usr/local/bin/log2ram write
TimeoutStartSec=120
RemainAfterExit=yes
[Install]
WantedBy=sysinit.target
log2ram keeps current logs in memory, but it also keeps the old logs from logrotate, vastly increasing memory usage through what are essentially old, non-current logs that are not in use and have already been copied to /var/hdd.log.
logrotate has an olddir directive, which unfortunately cannot be used across different devices.
There is a workaround, though, as postrotate can call scripts to mv the contents of the olddir elsewhere.
So use the olddir directive with /var/log/oldlog.
postrotate scripts could be used, but log2ram has an hourly routine that could equally move old logs from /var/log/oldlog to /var/hdd.log.
I am sitting here awaiting my next hourly run, hoping that a global olddir directive in /etc/logrotate.conf will move all old logs to /var/log/oldlog,
via a simple olddir /var/log/oldlog addition to /etc/logrotate.conf.
My optimism that it may be that simple is likely just that, and I expect it will not capture logs configured in /etc/logrotate.d.
But it's an example of something that really should exist: the olddir directive can be used as a staging directory, so log pruning moves logs to /var/hdd.log rather than just copying them.
Description=Log2Ram
I really struggled with journalctl entries and with getting systemd to work in the fashion the Before= & After= directives were set.
After much wasted time, I think the description should be lower case; I changed it in log2zram and it seemed to make the difference. At least journalctl -u log2zram.service works now.
The main problem, though, seems to be the IgnoreOnIsolate=yes directive, which in practice seems to mean "ignore other service requests entirely"; with it, I cannot seem to stop the occasional conflict.
Without it, I just ran 293 consecutive boots without a conflict, as the After= declaration combined with the Before= of the other service then works a treat.
Hi @azlux
could you create a v1.0.0 tag, please? :)
Reported first here: https://linuxfr.org/wiki/tuto-howto-transferer-les-logs-en-ram-avec-log2ram#comment-1747645
Tested on Ubuntu 18.04.
spartacus @ rome ~
└─ $ ▶ cd /tmp
spartacus @ rome /tmp
└─ $ ▶ curl -L https://github.com/azlux/log2ram/archive/master.tar.gz | tar xvzf -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 121 0 121 0 0 178 0 --:--:-- --:--:-- --:--:-- 178
100 4242 0 4242 0 0 3196 0 --:--:-- 0:00:01 --:--:-- 10500
log2ram-master/
log2ram-master/LICENSE
log2ram-master/README.md
log2ram-master/install.sh
log2ram-master/log2ram
log2ram-master/log2ram.conf
log2ram-master/log2ram.hourly
log2ram-master/log2ram.logrotate
log2ram-master/log2ram.service
log2ram-master/uninstall.sh
spartacus @ rome /tmp
└─ $ ▶ sudo chmod +x ./log2ram-master/install.sh && sudo ./log2ram-master/install.sh
[sudo] password for spartacus:
install: cannot stat 'log2ram.service': No such file or directory
install: cannot stat 'log2ram': No such file or directory
install: cannot stat 'log2ram.conf': No such file or directory
install: cannot stat 'uninstall.sh': No such file or directory
Failed to enable unit: Unit file log2ram.service does not exist.
install: cannot stat 'log2ram.hourly': No such file or directory
install: cannot stat 'log2ram.logrotate': No such file or directory
##### Reboot to activate log2ram #####
spartacus @ rome /tmp
└─ $ ▶ rm -r log2ram-master
spartacus @ rome /tmp
└─ $ ▶ sudo chmod +x /usr/local/bin/uninstall-log2ram.sh && sudo /usr/local/bin/uninstall-log2ram.sh
chmod: cannot access '/usr/local/bin/uninstall-log2ram.sh': No such file or directory
How do you uninstall this script? I have another program that depends on the log being in systemd but now it can't be found.
Thank you!
Since the last update I get a mail from cron with this content:
run-parts: /etc/cron.daily/log2ram exited with return code 1
The way the script currently uses rsync results in the following (I think):
If a log file has been modified since the last rsync, rsync copies the file on the SD card to a temporary file on the SD card, appends to this temporary file, removes the original log file on the SD card, and finally renames the temporary file to the correct filename.
Invoking rsync with the --inplace argument would instead result in:
If the log file has been modified since the last rsync, the contents are appended to the existing file on the SD card.
My guess is that using --inplace is likely to cause less SD card wear, BICBW. It might be a good idea to use the /sys filesystem I/O counters to check the total write IOPS in each case...
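One way to compare the two modes is to snapshot the kernel's per-device I/O counters in /sys before and after a sync. A minimal sketch: the device path is an assumption, and the demo runs on a sample stat line rather than real hardware (field 7 of a block device's stat file is sectors written, 512 bytes each):

```shell
#!/bin/sh
# Read "sectors written" (field 7) from a block device stat file,
# e.g. /sys/block/mmcblk0/stat on a Raspberry Pi (device name assumed).
sectors_written() {
    awk '{print $7}' "$1"
}

# Demo on a sample stat line instead of real hardware:
printf '100 0 800 40 50 0 240 30 0 60 70\n' > /tmp/stat.sample
before=$(sectors_written /tmp/stat.sample)
echo "sectors written: $before"             # prints: sectors written: 240
echo "bytes written: $((before * 512))"     # prints: bytes written: 122880
```

Taking the difference of two such readings across an hourly sync, once with --inplace and once without, would give hard numbers for the wear question.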
Hi,
I am wondering: what writing frequency to the hard disk does a normal user really need for the /var/log folder? (an average user, i.e. a small server or desktop usage)
I've heard sometimes 2 hours, and sometimes 1 day.
You create an ext4 filesystem on the zram device. It would be better to turn off journaling here: journaling only helps if you can get back to the data after a crash, and there is no hope of that on a RAM disk. You could either turn off journaling or switch to a bare-bones filesystem like ext2.
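A sketch of the corresponding mke2fs invocations; the device name is an assumption and this must run as root:

```shell
# Hypothetical: ext4 without a journal on the zram device...
mke2fs -t ext4 -O ^has_journal /dev/zram0
# ...or simply a filesystem that never had a journal:
mke2fs -t ext2 /dev/zram0
```

Either way the saved journal blocks become usable log space, and there is one less source of duplicate writes into compressed RAM.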
Line 64 in 07b3599
It seems apache2 needs /var/log/apache2/. log2ram transfers this directory to /var/log.hdd/apache2.
I fixed it by adding:
@reboot sudo mkdir /var/log/apache2
@reboot sudo service apache2 restart
to the crontab, but there must be a better way?
How do I fix this so apache2 can find its log folder on reboot?
I just noticed that log2ram had run out of space here, because my logs ballooned following massive spammer attacks onto my server.
As one consequence, I just suggested a change to fix the situation where, upon startup, it's already clear that tmpfs will run out of space. In that case, not only should log2ram exit, but the binds should also be removed. Otherwise we'll have no synched log data on tmpfs, but new logs will still be written there, resulting in the loss of historical logs.
To be safer, log2ram should also check the load status of tmpfs regularly, probably during write(). Then, if an issue is discovered, ideally log2ram would exit, but I'm not sure that's feasible with lots of file writes probably open...? Alternatively, log2ram should at least issue some noticeable warnings in such a situation.
Will think about this some more when I find the time... Or maybe someone else has some bright idea...
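As one possible shape for such a regular check, here is a minimal sketch; the threshold, function names, and checked path are assumptions for illustration, not existing log2ram behaviour:

```shell
#!/bin/sh
# Hypothetical sketch: warn when the filesystem holding the RAM log
# fills past a threshold. THRESHOLD and the checked path are demo values.
THRESHOLD=90

usage_pct() {
    # Percent used of the filesystem holding $1 (strips the trailing %).
    df -P "$1" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }'
}

check_ram_log() {
    pct=$(usage_pct "$1")
    if [ "$pct" -ge "$THRESHOLD" ]; then
        echo "WARNING: $1 is ${pct}% full"
        return 1
    fi
    echo "$1 at ${pct}%: ok"
}

check_ram_log /tmp   # demo on a path that exists everywhere
```

Called from write(), the warning could go to the synced log or to the mail path log2ram already uses for the "RAM disk too small" error.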
ZL2R: Enables zram compatibility (false by default). Check the comment in the config file. See https://github.com/StuartIanNaylor/zram-swap-config to configure a zram space on your Raspberry before enabling this option.
It doesn't require zram-swap-config, or shouldn't, but I do suggest taking a look at zram-swap-config, as zram-config-0.5.deb is so broken it's not true.
zram-swap-config just enables zram swaps and is separate from L2R using a zram drive.
Both are aware of any previous zram devices and just create another after the previous one.
I just changed the Conflicts=, since if systemd runs them at exactly the same time, both can see no previous device and then conflict.
That shouldn't happen now, but I bet I can run a boot-and-check start 100 times with no conflict, and then the first time it will happen elsewhere. Having the conflict in zram-swap-config should be optimal, as that service has little in the Before=/After= sections of its systemd unit config.
Maybe have a look at https://github.com/StuartIanNaylor/log2zram as it's a minimal log2ram using zram.
Hi mates, thank you a lot for this program!
I'm not an expert Linux user so I may be wrong here, but I noticed that the system keeps writing entries to the syslog file in the /var/log folder, I mean before the "hour" when I suppose log2ram dumps all the lines into the files.
For this reason I'm not sure it works correctly, but every hour syslog shows the lines:
Apr 5 00:17:01 raspberrypi systemd[1]: Reloading Log2Ram.
Apr 5 00:17:01 raspberrypi systemd[1]: Reloaded Log2Ram.
For example
I would like to reduce writes to the SD card as much as possible; a friend suggested this:
$ cat /etc/fstab
proc /proc proc defaults 0 0
PARTUUID=7b4d7c24-01 /boot vfat defaults 0 2
PARTUUID=7b4d7c24-02 / ext4 defaults,noatime,commit=600,errors=remount-ro 0 1
tmpfs /tmp tmpfs defaults,noatime,nosuid 0 0
tmpfs /var/log tmpfs defaults,noatime,nosuid,size=16m 0 0
but I think his solution doesn't dump anything to a file, it just moves the whole log folder into RAM, and I don't want to lose log files after a reboot/shutdown/power loss.
Can you please help me clear this up? Thank you a lot!
It would be cool if log2ram could use a zram disk.
It's likely that many people using log2ram also have zram.
Just a suggestion to make the RAM footprint tiny while increasing the log size.
It would mean just a second zram disk, which is likely to get 3:1 compression with little performance hit.
Hi,
Thanks for this piece of code.
I wanted to try to install this script on Ubuntu 17.10. Unfortunately the installer exited because there is no /usr/local/bin on the newest Ubuntu. So I created /usr/local/bin and the installation completed.
But I still cannot run the RAM disk. I can see in the syslog:
Feb 16 19:47:20 akacja log2ram[808]: ERROR: RAM disk too small. Can't sync.
Feb 16 19:47:20 akacja log2ram[808]: mail: cannot send message: Process exited with a non-zero status
Maybe I should symlink /usr/local/bin/log2ram into /usr/bin?
Or maybe there are other ways to solve my issue?
TIA and regards
Hello,
I am trying to install log2ram. There are no error messages during installation, but log2ram is unable to run:
"ERROR: RAM disk too small. Can't sync." Here is the result of df -h:
/dev/root 7.1G 1.7G 5.2G 25% /
devtmpfs 458M 0 458M 0% /dev
tmpfs 462M 4.0K 462M 1% /dev/shm
tmpfs 462M 6.5M 456M 2% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 462M 0 462M 0% /sys/fs/cgroup
/dev/mmcblk0p1 63M 21M 42M 34% /boot
How can I fix this problem?
I will probably just create a background shell script for the write function, started on start and killed on stop.
This will replace the cron job; also, using service reload works, but it is not really correct.
Hi, I am on a current Fedora Workstation release and thought it would be a good idea to use log2ram (SSD as boot disk).
On a fresh install of log2ram I experience problems while booting: sssd, auditd, firewalld, gdm and so on cannot write to the log because of permission denied. Permissions are the defaults. A restorecon -rv /var/log/ (and /var/hdd.log) fixes the SELinux labels; the daemons can start after that.
What do i miss?
The log2ram.log keeps increasing in size and never gets cleaned, so after some time it will eat all available space.
My suggestion is to include it in a logrotate script or handle it internally. Sample (bash):
if [[ -e "$LOG" ]] ; then # rename the log file when it grows too large
FILESIZE="$(stat -c %s "$LOG")"
[[ $FILESIZE -ge $MAXLOGSIZE ]] && mv --force "$LOG" "${LOG}.old"
fi
I had no idea where else to ask, so I will ask here.
I understand the logic of log2ram, and the idea is super.
But I cannot work out what will happen when the partition (of, for example, 40MB) gets filled.
I have an app that does heavy logging and fills the whole log partition in a few hours.
I understand that the log partition is written to SD/HDD by cron every hour, but my logging stops when the partition is full.
OK, the solution is to edit the conf file and reserve more MB for it, but is there any solution whereby, if it reaches for example 90%, the partition is written to the SD/HDD log and the log2ram partition is cleared, to prevent logging from stopping?
Thanks for any suggestion!
Sometimes, when the system boots fast, the copy of the logs happens before the mount is effective.
The mount command is async. I didn't know that!
I need to test my fix! Please wait a few days!
The README says we should run two commands to test whether log2ram is really working.
This is my output; how do I know it is working as intended?
root@MyCloud:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.9G 566M 1.3G 32% /
devtmpfs 114M 0 114M 0% /dev
tmpfs 114M 0 114M 0% /dev/shm
tmpfs 114M 4.7M 109M 5% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 114M 0 114M 0% /sys/fs/cgroup
tmpfs 64M 0 64M 0% /tmp
/dev/sda4 2.7T 1.5T 1.2T 56% /data
tmpfs 23M 0 23M 0% /run/user/0
root@MyCloud:~# mount
/dev/md0 on / type ext3 (rw,noatime,nodiratime,errors=remount-ro,user_xattr,acl,commit=60,barrier=1,data=ordered)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=116056k,nr_inodes=29014,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime,size=65536k)
/dev/sda4 on /data type ext4 (rw,noatime,nodiratime,errors=remount-ro,user_xattr,commit=60,barrier=0,data=writeback)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=23228k,mode=700)
Hi,
When trying to install on a system with the old layout ( /var/log.hdd ), it fails to remove it (this line) with:
rm: cannot remove '/var/log.hdd': Device or resource busy
It probably needs to be unmounted and rebooted first. I could create a PR if we agree on a way to do it.
I asked some questions related to log2ram on the systemd-devel mailing list. The thread starts at https://lists.freedesktop.org/archives/systemd-devel/2019-March/042312.html and there are two main suggestions:
(1) I was asking about how to allow journald to continue logging to the SD card. journald is intelligent about reducing writes but will try its best to write emergency messages during a crash, so these want to be written directly to persistent store, not buffered by log2ram. The suggestion by Colin Guthrie is to use a bind mount. This would be a nice feature to add.
(2) I mentioned the Before=systemd-journald.service dependency, and Lennart Poettering advised that it is better to use Before=systemd-journal-flush.service
I also notice the Before=apache2.service dependency and have a question about it. Why is it there? Is it just one example of a service that must not be started until the RAM log directories are prepared? What about other services that write logs, such as redis or mysql? Does the service file need customising on every machine where it is run or is there some other technique could be used?
The service reload via the hourly cron job causes a sync to disk every hour.
There is no check of drive usage at all, and with the cp command the whole $RAM_LOG is written out to the HDD.
Isn't this likely to cause vastly more HDD writes than just appending to normal logs, in many situations?
It completely lacks any form of control. It should probably run more frequently, but with logical tests of whether log usage necessitates a sync, rather than just overwriting everything each hour:
syncToDisk () {
    isSafe
    if [ "$USE_RSYNC" = true ]; then
        rsync -aXWv --delete --links $RAM_LOG/ $HDD_LOG/ 2>&1 | $LOG_OUTPUT
    else
        cp -rfup $RAM_LOG/ -T $HDD_LOG/ 2>&1 | $LOG_OUTPUT
    fi
}
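One possible form of such a test, sketched below: skip the sync when nothing under the RAM log has changed since a marker file was last touched. The marker-file approach and all paths are assumptions for illustration, not log2ram's actual logic:

```shell
#!/bin/sh
# Hypothetical sketch: sync only if files changed since the last sync.
RAM_LOG=/tmp/ram.log.demo
STAMP="$RAM_LOG/.last-sync"

mkdir -p "$RAM_LOG"
touch "$STAMP"
sleep 1                                  # ensure a clearly newer mtime
echo "new entry" >> "$RAM_LOG/syslog"    # simulate log activity

changed=$(find "$RAM_LOG" -type f ! -name .last-sync -newer "$STAMP")
if [ -n "$changed" ]; then
    echo "changes found: syncing"        # here syncToDisk would run
    touch "$STAMP"
else
    echo "no changes: skipping sync"
fi
```

With a guard like this the routine could safely run far more often than hourly, while idle systems would see almost no disk writes at all.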
I guess something like sudo sed -i.bak '/^\s*rotate .*/i olddir /var/log/oldlog' inputfile
but I am far from accomplished with sed.
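For what it's worth, the insertion can be rehearsed on a scratch file first. This sketch uses the simpler anchor from the snippets earlier in the thread (inserting before the include line); the file contents are made up for the demo:

```shell
#!/bin/sh
# Hypothetical demo: insert a global olddir directive into a scratch copy
# of logrotate.conf, just before its "include" line (GNU sed).
cat > /tmp/logrotate.conf.demo <<'EOF'
weekly
rotate 4
create
include /etc/logrotate.d
EOF

sed -i '/^include/i olddir /var/log/oldlog' /tmp/logrotate.conf.demo
cat /tmp/logrotate.conf.demo
```

Anchoring on the include line means the directive lands once, globally, before any per-package configs are pulled in.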
I installed log2ram on a freshly installed Stretch Lite on a Raspberry Pi 3B, yesterday evening.
# [2018-05-05 22:28] maxg@rpi31 ~/downloads/log2ram-master $ chmod +x install.sh && sudo ./install.sh
Created symlink /etc/systemd/system/sysinit.target.wants/log2ram.service → /etc/systemd/system/log2ram.service.
df -h
shows that /var/log has grown from:
log2ram 40M 2.7M 38M 7% /var/log
to
log2ram 40M 3.5M 37M 9% /var/log
checking the log with
cat /var/log/log2ram.log
returns nothing; checking the file size of the log reveals 0 bytes:
ls -l /var/log/log2ram.log
-rw-r--r-- 1 root root 0 May 5 22:31 /var/log/log2ram.log
ls -la /etc/cron.hourly/
-rwxr-xr-x 1 root root 44 May 5 22:29 log2ram
cat /etc/cron.hourly/log2ram
#!/usr/bin/env sh
systemctl reload log2ram
Any hints what I can further check or do to get log file entries?
Hello,
I've been using log2ram for several weeks on my Raspberry Pi.
I noticed that when I restart my Raspberry, I have a failure:
systemctl --failed
shows console-setup.service.
I suggest that you add console-setup.service to the Before= line in log2ram.service.
I did it: it works for me.
Regards,
Yves
Hi, I use Armbian on a Cubietruck (A20). Since the last update, log2ram complained about the folder /var/log.hdd/, which I then created. Since then, I receive the email below from root daily.
In order not to receive this email, as a workaround, I added > /dev/null 2>&1 to the crontab entry for the daily job. Nevertheless, this workaround affects all daily jobs.
What is the correct way to stop receiving daily emails from log2ram when there is no error?
Thank you
/etc/cron.daily/log2ram:
sending incremental file list
./
alternatives.log
auth.log
daemon.log
dpkg.log
kern.log
lastlog
mail.info
mail.log
messages
minidlna.log
mysql.log
mysql.log.1.gz
mysql.log.2.gz
mysql.log.3.gz
mysql.log.4.gz
mysql.log.5.gz
mysql.log.6.gz
mysql.log.7.gz
syslog
syslog.1
syslog.2.gz
syslog.3.gz
syslog.4.gz
syslog.5.gz
syslog.6.gz
syslog.7.gz
wtmp
apache2/
apache2/access.log
apache2/access.log.1
apache2/access.log.10.gz
apache2/access.log.11.gz
apache2/access.log.12.gz
apache2/access.log.13.gz
apache2/access.log.14.gz
apache2/access.log.2.gz
apache2/access.log.3.gz
apache2/access.log.4.gz
apache2/access.log.5.gz
apache2/access.log.6.gz
apache2/access.log.7.gz
apache2/access.log.8.gz
apache2/access.log.9.gz
apache2/error.log
apache2/error.log.1
apache2/error.log.10.gz
apache2/error.log.11.gz
apache2/error.log.12.gz
apache2/error.log.13.gz
apache2/error.log.14.gz
apache2/error.log.2.gz
apache2/error.log.3.gz
apache2/error.log.4.gz
apache2/error.log.5.gz
apache2/error.log.6.gz
apache2/error.log.7.gz
apache2/error.log.8.gz
apache2/error.log.9.gz
apt/history.log
apt/term.log
mysql/
mysql/error.log
mysql/error.log.1.gz
mysql/error.log.2.gz
mysql/error.log.3.gz
mysql/error.log.4.gz
mysql/error.log.5.gz
mysql/error.log.6.gz
mysql/error.log.7.gz
unattended-upgrades/unattended-upgrades-dpkg.log
unattended-upgrades/unattended-upgrades.log
sent 5,106,397 bytes received 1,360 bytes 3,405,171.33 bytes/sec
total size is 9,042,498 speedup is 1.77
Hi again!
I tried to manually run log2ram with a parameter (start/stop/write) and saw:
/usr/local/bin/log2ram: 35: /usr/local/bin/log2ram: [[: not found
P.S.: It works perfectly when I change #!/bin/sh to #!/bin/bash
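For context, the "[[: not found" error appears because [[ ]] is a bash/ksh extension, while Debian's /bin/sh is dash. A sketch of the same test in portable POSIX sh; the variable name comes from the script, the value here is an assumption:

```shell
#!/bin/sh
# [[ "$USE_RSYNC" = true ]] fails under dash; POSIX [ ] works everywhere.
USE_RSYNC=true   # assumed config value for the demo

if [ "$USE_RSYNC" = true ]; then
    echo "would use rsync"
else
    echo "would use cp"
fi
```

Keeping the #!/bin/sh shebang and using [ ] avoids depending on bash being installed at all.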
I've just set this up on a couple of old Cubox 1s running Arch Linux. Could you add to the README.md that Arch uses systemd timers by default, but that installing the cronie package ('pacman -S cronie', followed by 'systemctl enable cronie') adds cron support? All works fine.
Thanks for the useful utility.
After I installed log2ram, console autologin fails. I have tried configuring it with raspi-config again and again, but it still asks me to enter a password. After I stop and disable the log2ram service, autologin works again. How can I fix this problem? Thanks for any help.
Hi.
There seems to be an error in the conditions in log2ram:
line 22
if ["$USE_RSYNC" = true]; then
and line 32
if ["$USE_RSYNC" = true]; then
I think it should be
if [ "$USE_RSYNC" = true ]; then
Thanks!
Hi,
thanks for this great tool.
Currently, I use ramlog on Raspbian Wheezy systems with init.d.
Is there a way I could use log2ram on init.d systems as well?
Maybe you could provide an init script for older systems.
(ramlog has not been updated for years and sometimes makes trouble on my Raspberrys...)
Greetings,
Heiko (Germany)
If something goes wrong during log2ram's execution, are the error messages (from cp and rsync) stored anywhere, or are they lost to the ether? It seems like they're lost.
log2ram should redirect the copy's output directly to the disk (to "${HDD_LOG}log2ram.log") so that any errors in log2ram can be diagnosed. That won't help if the HDD_LOG variable is messed up, but it'll help in all other situations.
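A minimal sketch of that redirection, using throwaway demo paths rather than log2ram's real variables:

```shell
#!/bin/sh
# Hypothetical demo: send a copy's stdout and stderr straight to a log
# file on the disk side, so failures survive even if the RAM copy is lost.
HDD_LOG=/tmp/hdd.log.demo/
mkdir -p "$HDD_LOG"

# A copy that is guaranteed to fail, so an error line lands in the log:
cp /no/such/file "$HDD_LOG" >> "${HDD_LOG}log2ram.log" 2>&1 || true

cat "${HDD_LOG}log2ram.log"   # the cp error message is now on disk
```

Since the target file lives on the persistent side, the error report survives a reboot even when the tmpfs contents do not.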
Hello,
Do you have an uninstall.sh script? (*)
There are two ideas behind this question:
1/ I'm working with Ansible (not install.sh). The idea is to write the playbook and test it, uninstall, re-test it, and so on.
2/ Maybe it could become a Debian package for Raspbian?
(*) Yes, such a question here sounds weird :)
When enabling rsync in the conf, it uses the -W switch to copy whole files instead of using the delta algorithm:
-W, --whole-file copy files whole (without delta-xfer algorithm)
Is there any reason for that? Without -W it would write less data, thanks to the intelligent delta sync.