Earth Notes: On Website Technicals (2017-09)
Updated 2024-02-24.
DOCTYPE, through purifyCSS and UnCSS, to the OnDemand Governor. And meeting my old friend ImageMagick again!
2017-09-24: OnDemand Governor Tweaks
In the power manager code for this server I have changed the battery-level threshold at which the CPU speed governor is adjusted (via up_threshold), from VLOW to LOW. I think that race-to-idle is a better strategy for conserving energy where there is lots of I/O, until things get desperate. The RPi is only spending a few percent (<4%) of its time at the low clock speed anyway, presumably because of the continuous flow of inbound requests across the Net, good and bad.
Current governor parameters:
::::::::::::::
/sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load
::::::::::::::
0
::::::::::::::
/sys/devices/system/cpu/cpufreq/ondemand/io_is_busy
::::::::::::::
1
::::::::::::::
/sys/devices/system/cpu/cpufreq/ondemand/powersave_bias
::::::::::::::
0
::::::::::::::
/sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor
::::::::::::::
50
::::::::::::::
/sys/devices/system/cpu/cpufreq/ondemand/sampling_rate
::::::::::::::
100000
::::::::::::::
/sys/devices/system/cpu/cpufreq/ondemand/sampling_rate_min
::::::::::::::
10000
::::::::::::::
/sys/devices/system/cpu/cpufreq/ondemand/up_threshold
::::::::::::::
60
When the battery is not low it may be better to reduce sampling_rate from the current 100ms, to cut the latency of responding to a new inbound (HTTP) request when the system is otherwise idle. It may also be useful to reduce sampling_down_factor when the battery is low, to get back to a lower CPU speed sooner.
Manually setting the sampling_rate to 10000 and sampling_down_factor to 10 causes the fraction of time spent at the lowest CPU clock rate to rise. This also seems to result in kworker instances being visible in top's output much more often.
So I now also have the power manager adjust the sampling_rate
and sampling_down_factor
governor parameters at the LOW threshold, with quicker speed-up and default run-on times above, switching to the default gentler speed-up and quicker slow-down when below. (A fast sampling_rate
of 20000 (20ms), and a sampling_down_factor
of 5 seem reasonable. For perspective note that the delay across the site's FTTC connection is ~8ms, RTT ~16ms; RTT between California (eg source of much Google crawling) and London is ~130ms.)
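In outline the adjustment just writes a couple of sysfs values either side of the LOW threshold; a minimal sketch follows (the function names are mine, and the separate up_threshold adjustment is not shown):

# Sketch only: switch ondemand governor tuning with battery state (run as root).
GOV=/sys/devices/system/cpu/cpufreq/ondemand
governor_responsive() {       # battery OK: quick speed-up, default run-on at speed
    echo 20000 > $GOV/sampling_rate
    echo 50 > $GOV/sampling_down_factor
}
governor_conserving() {       # battery LOW: default gentler speed-up, quicker slow-down
    echo 100000 > $GOV/sampling_rate
    echo 5 > $GOV/sampling_down_factor
}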
I am seeing a power-system power draw of ~830mW when the battery is LOW, cf ~920mW+ minimum before tweaking. This system continues to deal with quite a high volume of NTP, DNS, and other inbound requests, maybe ~100 packets/sec.
Having seen a significant CPU load (for example) from sshd services during run-of-the-mill automated break-in attempts, I adjusted ssh to be slightly less exposed.
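I won't record the exact change here, but the sort of light-touch sshd_config tightening I mean looks something like the following; these particular directives are purely illustrative, not a record of this server's settings:

# Illustrative sshd_config hardening only; not necessarily what was changed here.
PermitRootLogin no
PasswordAuthentication no
MaxStartups 3:50:10
LoginGraceTime 20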
I am seeing now around 20% (but very variable) of time at the lowest CPU speed with sampling_down_factor
at 5; near 50% at 2 or 1, though with time in kworkers
rising from about 0.3% to 0.7% maybe from the extra work being done by the governor. In the latter case power can drop below 800mW intermittently, though page delivery as seen by WebPageTest and Pingdom can be slightly slower than all-default settings, including TTFB. However it can be fairly variable in all cases.
This sample of power readings is with conserving / non-responsive settings active in OK rather than LOW state:
2017/09/27T00:00:07Z AL 0 B1 12629 B2 -1 P 859 BV 12600 ST OK - t A1P 0 B1T 19
2017/09/27T00:10:06Z AL 0 B1 12647 B2 -1 P 860 BV 12591 ST OK - t A1P 0 B1T 19
2017/09/27T00:20:06Z AL 0 B1 12656 B2 -1 P 798 BV 12600 ST OK - m A1P 0 B1T 19
2017/09/27T00:30:06Z AL 0 B1 12659 B2 -1 P 899 BV 12600 ST OK - m A1P 0 B1T 18
2017/09/27T00:40:06Z AL 0 B1 12656 B2 -1 P 798 BV 12600 ST OK - m A1P 0 B1T 18
2017/09/27T00:50:06Z AL 0 B1 12656 B2 -1 P 861 BV 12609 ST OK - m A1P 0 B1T 18
2017/09/27T01:00:06Z AL 0 B1 12650 B2 -1 P 684 BV 12609 ST OK - m A1P 0 B1T 18
2017/09/27T01:10:06Z AL 0 B1 12647 B2 -1 P 797 BV 12600 ST OK - m A1P 0 B1T 18
2017/09/27T01:20:06Z AL 0 B1 12647 B2 -1 P 797 BV 12600 ST OK - m A1P 0 B1T 18
2017/09/27T01:30:06Z AL 0 B1 12644 B2 -1 P 860 BV 12600 ST OK - m A1P 0 B1T 18
2017/09/27T01:40:06Z AL 0 B1 12629 B2 -1 P 859 BV 12600 ST OK - m A1P 0 B1T 18
2017/09/27T01:50:06Z AL 0 B1 12626 B2 -1 P 796 BV 12600 ST OK - m A1P 0 B1T 18
2017/09/27T02:00:06Z AL 0 B1 12626 B2 -1 P 834 BV 12600 ST OK - m A1P 0 B1T 18
2017/09/27T02:10:06Z AL 0 B1 12626 B2 -1 P 922 BV 12572 ST OK - m A1P 0 B1T 18
2017/09/27T02:20:06Z AL 0 B1 12626 B2 -1 P 834 BV 12581 ST OK - m A1P 0 B1T 18
I am still making small tweaks to parameters and the battery SoC thresholds that they trigger at...
2017-09-23: Thicker Heroes
I have adjusted the columnar home-page display to autogenerate hero images with a nearer-square aspect ratio, which looks better and, because of the small width, does not take too much vertical space. This may help generate in-page floating hero images in future too.
2017-09-20: Response Time
By eye, average response time (and volatility) has been dropping as hoped since various interventions started circa mid-July. This is for the main (www) site.
It is just possible that I am seeing slightly increased download times over the last few days, given tweaks to the energy-saving thresholds in the RPi server.
2017-09-16: Even Smaller: UnCSS
A quick look at UnCSS (V0.15.0) as an alternative to purifycss
shows that it seems to look for CSS class use in a way that may result in fewer false positives than purifycss.
Fetched with npm install uncss -g
for Mac and RPi.
Note that UnCSS cannot currently handle inline CSS.
Cobbling together a simulated test homepage HTML offline with the correct inserts, and faked stylesheet links, and correct ignored (whitelisted) classes (note the different format to purifycss) results in:
% uncss -m screen,print -i .container,.sidebar,.sml,.noprint .test.html
/*** uncss> filename: img/css/base-20170906.css ***/
h1,h2{font-family:sans-serif}.sidebar{color:#000;background-color:#cfc;margin:1em 0}.sml{font-size:small}.pgdescription{font-weight:700;margin:1em 0}.resp{max-width:100%;height:auto}.respfloatr,.respfloatrsml{float:right;clear:right;height:auto;margin:1em 0 1em 1.5em}.respfloatrsml{max-width:33%}.respfloatr{max-width:50%}@media screen and (min-width:640px){.container{max-width:800px;margin:auto}}@media print{.noprint{display:none!important}}
/*** uncss> filename: img/css/col-20170912.css ***/
.colcontainer{width:100%;margin:0 auto;text-align:center}.colD{display:inline-block;margin:2px;padding:2px;text-align:left;vertical-align:top}.colD{width:48%;max-width:310px}@media screen and (max-width:639px){.colcontainer{max-width:320px}.colD{width:100%;max-width:310px}}
/*** uncss> filename: img/css/desktop-20170906.css ***/
@media screen and (min-width:800px){.container{font-size:1.2rem}}
So about 100 bytes smaller than the purifycss (890-byte) minified CSS once UnCSS's own comments are removed:
h1,h2,h3{font-family:sans-serif}.sidebar{color:#000;background-color:#cfc;margin:1em 0}.sml{font-size:small}.pgdescription{font-weight:700;margin:1em 0}.resp{max-width:100%;height:auto}.respfloatr,.respfloatrsml{float:right;clear:right;height:auto;margin:1em 0 1em 1.5em}.respfloatrsml{max-width:33%}.respfloatr{max-width:50%}@media screen and (min-width:640px){.container{max-width:800px;margin:auto}}@media print{.noprint{display:none!important}}@media screen and (min-width:800px){.container{font-size:1.2rem}}.colcontainer{width:100%;margin:0 auto;text-align:center}.colD,.colT{display:inline-block;margin:2px;padding:2px;text-align:left;vertical-align:top}.colD{width:48%;max-width:310px}.colT{width:31%;max-width:200px}@media screen and (max-width:639px){.colcontainer{max-width:320px}.colD,.colT{width:100%;max-width:310px}}@media screen and (min-width:640px){.colT{max-width:250px}}
Initial inspection shows that UnCSS has correctly removed the (unused) h3 and colT support, for example.
Since the HTML can be presented on stdin I would probably need to run UnCSS something like:
% cat source.html insert.html insert2.html | \
    uncss -t 0 -n -m screen,print -s img/css/base-20170906.css,...,img/css/desktop-20170906.css -i .container,.sidebar,.sml,.noprint
(The -t 0 is to discourage UnCSS from trying to load and run the embedded ad scripts; I don't fiddle with CSS in JS.)
Although -n
eliminates the UnCSS banners, it still leaves wasteful newlines and blank lines in the output, which for a minifier is bad!
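Since the minified CSS should not need any literal newlines, one way the wrapper could squeeze those out is simply to append a filter such as tr to the pipeline, for example:

% cat source.html insert.html insert2.html | \
    uncss -t 0 -n -m screen,print -s img/css/base-20170906.css,...,img/css/desktop-20170906.css -i .container,.sidebar,.sml,.noprint | \
    tr -d '\n'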
A tighter test suggests that there are ~40 bytes to be saved on the home page (and ~30 bytes on the official smallest page):
% cat source.html insert.html insert2.html | uncss -t 0 -n -m screen,print -s img/css/base-20170906.css,...,img/css/desktop-20170906.css -i .container,.sidebar,.sml,.noprint | wc -c
     851
% egrep '^h1,h2' index.html | wc -c
     891
CRP
All these savings are in the critical rendering path, so potentially very well worth having...
Replacing purifycss with UnCSS, with the latter's lack of false positives, reduces both the compressed and uncompressed size of the smallest page by about three bytes (the compressed mobile page size is now 1357 bytes).
The total saving of UnCSS for the uncompressed desktop homepage is 260 bytes and the compressed page 94 bytes. That's lots of extra words in the first TCP packet, potentially.
Home Page Shrinkage
For the home page the reduction in the compressed desktop page over purifycss is 13 bytes, and for the mobile page 15 bytes, the uncompressed pages having come down by respectively 45 and 47 bytes.
Not quite as much shrinkage as initial indications, but still worthwhile.
Inspecting outputs by eye I note that like purifycss, UnCSS has correctly avoided stripping out fallback CSS code such as .fullwcdisp{width:100%;width:100vw;
... for older browsers. Good.
I haven't measured it (yet), but UnCSS seems to be significantly slower to run than purifycss. Maybe that's the cost of increased precision.
Note that, for example, Chrome's CSS coverage report for the home page still claims ~25% unused bytes. (The number goes down a bit if the screen width is adjusted to trigger more of the media queries.) Partly because separate rules for the same selector are not being merged, etc, which would make sense after other selectors have been stripped.
For now I have set the wrapper script up to use UnCSS if available, providing that there appear to be no embedded scripts, else purifycss if available, else just cat
the CSS!
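A rough sketch of that fallback logic (the variable names and the exact purifycss flags here are illustrative, not the real wrapper script) might be:

# Hypothetical sketch of the wrapper's inline-CSS minification fallback.
if command -v uncss >/dev/null 2>&1 && ! grep -q '<script' "$SRCHTML"; then
    cat "$SRCHTML" $INSERTS | \
        uncss -t 0 -n -m screen,print -s "$CSSLIST" -i "$IGNORE" | tr -d '\n' > "$OUTCSS"
elif command -v purifycss >/dev/null 2>&1; then
    purifycss $CSSFILES "$SRCHTML" --min --out "$OUTCSS"
else
    cat $CSSFILES > "$OUTCSS"
fi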
Using this to estimate the amount of (this, mobile) page that can be delivered in the first TCP frame over http (allowing ~290 bytes for HTTP header overhead) with this command, the output being fed to a browser:
dd < m/note-on-site-technicals-4.htmlgz bs=1160 count=1 | gzip -d
shows that quite a decent amount of text gets shown (headings, sidebar, a full para and some), and enough information for the browser to load the hero image.
Postscript
Fiddling with page layout, partly to get more body text into the first TCP packet in typical mobile pages, I've moved the hero image below the description/subhead text. In the above page I would not expect any particular change since that point has already been passed in the compression, but another seven-and-a-bit words (and a new para)
make it through!
2017-09-13: Optional Tags
If everything else is in order (and there are no attributes) then nominally the html
, head
and body
tags are completely optional, and for example omitting the closing </body></html>
and newline saves 15 bytes from the uncompressed main page and 8 bytes (0.1%) from a typical gzipped version. They are already being stripped from the mobile page by the HTML minifier.
I agonised, slept on it, then removed the <head>
and <body>
opener tags too, since they are on the critical rendering path and might admit another word or two into the first packet instead when gone. The HTML minifier is already stripping them from the mobile page, and the context (the first immediately followed by a meta tag and the second by a nav tag) is safe, I think.
The W3C verifier is happy with the resulting pages, but there is a faint risk that a real browser will choke somehow, in which case I may put at least the openers back in.
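As a reminder of why this is legal, a minimal page along these lines (not this site's actual markup) still validates:

<!doctype html>
<meta charset=utf-8>
<title>Minimal page</title>
<nav><a href="/">Home</a></nav>
<p>Body text starts here, with no explicit html, head or body tags at all.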
2017-09-12: Paradise Lost
It seems that purifycss
is breaking something, eg the index page carousel remains stuck in a single-column format in Firefox and Chrome, so I have temporarily disabled this minification step.
Paradise Regained
Hmm, I can't seem to reproduce the problem. It may be some non-deterministic behaviour such as random re-ordering of CSS rules (my CSS is probably too fragile). Anyhow, it's all on again for now.
You Spin Me Round Like a Record
I have just seen two pages (m and www) with empty style blocks in the header, which makes for an odd-looking (though not impossibly-broken) view. Maybe there is something non-deterministic afoot. Maybe it is not safe to run copies of purifycss
in parallel for example. For the moment I have put in an extra check to abort a page build if an empty minified inline CSS block is produced.
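The check itself is trivial, something along these lines (names are mine, not the actual script's):

# Hypothetical guard: abort the page build if the minified inline CSS is empty.
if [ ! -s "$MINIFIEDCSS" ]; then
    echo "ERROR: empty minified inline CSS for $PAGE; aborting build" >&2
    exit 1
fi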
2017-09-11: CSS Static Analysis: purifycss
For all the site's main pages bar one, CSS is small and inlined, and other than a very few very common classes, CSS support is selectively included so as to keep the header before real content starts to not much more than ~1kB. The base CSS is small at 633 bytes.
Even then, not every part of each CSS chunk is needed in all pages, and some of it depends dynamically on what the wrapper has used, and even on build-time page includes.
Chrome's built-in developer tools have been whining about unused CSS.
So I am wasting bandwidth and browser/phone CPU time that I need not...
Pure Npm
Enter purifycss and its ability to do static analysis on the CSS usage on an HTML page.
An incantation to the effect of npm install purify-css -g
works to get the purifycss command (V1.2.5) available on both Mac and RPi in the twinkling of an eye. (A 'rehash
' for the tcsh
shell I like helps too.)
With only a small amount of smartness in my wrapper script, I am now running purifycss
to strip out unused CSS from whatever I would otherwise be inlining. I give it the raw HTML source before my initial preprocessing (including any dynamically-generated insert) to see what CSS is being used.
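The invocation is roughly of this shape, with the page's candidate CSS files and the raw HTML as inputs (the file names here are illustrative):

% purifycss img/css/base-20170906.css img/css/col-20170912.css page-src.html --min --out inline.css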
For the current smallest real (mobile) page the uncompressed size drops from 4285 to 4063 bytes, and gzipped from 1606 to 1522 bytes, thus ~5%, arising from an inlined raw CSS size reduction of ~35%, using purifycss
.
Likewise the main (www.) uncompressed and gzipped home pages go from:
18621 index.html 7411 index.htmlgz
to:
18332 index.html 7316 index.htmlgz
And mobile:
15372 m/index.html 6254 m/index.htmlgz
to:
15085 m/index.html 6156 m/index.htmlgz
So a typical home page visitor is seeing a reduction of ~1%, which is probably another line or two of text in the first packet too, and less redundant CSS to be chewed through by the browser.
According to Chrome's CSS coverage tool, the main page unused CSS bytes are reduced from 502 (45.3%) to 243 (29.7%). (Note that I did not resize the page or anything that might have exercised some of the media queries, which may be relevant.) I can still see some things that could be removed and are not being so, but it does not yet seem that anything vital is being removed, ie that purifycss
is safe and conservative. Also it seems to preserve fall-back CSS (for older browsers) correctly.
Almost all pages show some CSS reduction, ~15% on average. Given that this is all on the critical rendering path (CRP) to getting the first text and images displayed for the visitor, this seems worth doing.
Each page build on the RPi is now taking significant time, especially for mobile, given multiple levels of minification and then compression. Still bearable, but some of the work is confined to mobile pages only, partly to retain my sanity. Now zopfli is looking positively nippy! Also make -j4
is my friend for my multi-core RPi...
I can easily disable this filtering in the wrapper script if there is a problem.
Late News: Version
This issue indicates that there might be some version confusion.
And indeed, having installed purify-css
, note the dash, purifycss -v
yields 1.2.5. A replacement may be wise, though it turns out that purify-css
is meant to be the canonical one.
% sudo npm uninstall purify-css -g
removed 94 packages in 0.894s
% sudo npm install purifycss -g
% purifycss -v
1.2.6
Later News: Bug
purify-css
seems to be erroneously retaining classes based on case-insensitive matches on text in HTML comments and body text.
I renamed the 'cols' CSS file mentioned in such a comment to avoid retaining its own colS
class incorrectly. Bang goes another ~40 bytes from the uncompressed output HTML/CSS for the home page, though oddly only 0 or 1 bytes off the gzipped output it seems.
Even Later News: Single Packet Page
A little more fiddling to reduce weight and the gzipped smallest mobile page now weighs in at 1358 bytes; nominally small enough to fit in a single typical (IPv4) TCP frame, though there's probably ~200 bytes overflow into a second frame because of HTTP headers as noted before. With brotli the compressed size drops to 1086 bytes, which while nominally small enough for that single frame, will be wrapped in probably smaller HTTP/2 headers but also probably an extra few kB of new-connection TLS/https overhead.
2017-09-10: DOCTYPE Lowercased, Progressive JPEGs, Guetzli
Just as converting the "UTF-8" to "utf-8" in the meta charset
tag saved a few bytes in the compressed output for free, it turns out that doing the same for the leading DOCTYPE
saves bandwidth too. Before:
18663 index.html 7427 index.htmlgz
After converting DOCTYPE to lowercase in spite of some minor worries (though note that eg google.com and lite.cnn.io
do it too), four bytes are saved in the compressed form:
18663 index.html 7423 index.htmlgz
This change knocks five bytes off the compressed form of the smallest www page, but just 3 from the mobile version (down to 2027 and 1603 bytes respectively), and seems to knock 27 bytes off the largest (www) page, down to 44346 bytes compressed. All small absolute savings, but with no obvious penalty.
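The build-time change itself is a one-liner; something like this sketch works, assuming the DOCTYPE sits alone on the first line of the page source (file names here are illustrative):

sed -e '1s/<!DOCTYPE html>/<!doctype html>/' index-src.html > index-lc.html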
I note that brotli's static dictionary does seem to have an entry for <!doctype html>
, while it doesn't have one for the quoteless attribute value meta charset
tag.
JPEG Heroes
I am experimenting with making all auto-generated JPEG hero banner images progressive, to help get some of the image over slow channels sooner. I am not expecting any increase in size, nor equivalently any reduction in perceived final image quality for a given ceiling byte size. (There is some evidence that users don't like/understand progressive updates, and not all browsers may display them progressively anyway, but this change is easy to revert if a problem.)
Where this would be most likely to reduce perceived image quality per byte is for the smaller 'mobile' images, at least according to the 10kB rule of thumb, but I still like delivering something visible quickly on slow connections. So I'll take that risk.
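The generation change is small: ImageMagick can be asked for a progressive JPEG with -interlace, along these lines (the file names, geometry and quality here are illustrative, not the real wrapper's values):

% convert hero-src.png -resize 800x -interlace Plane -quality 85 -strip hero.jpg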
With WebPageTest simulating load of one of the mobile pages but over dialup (~50kbps) the text starts usefully rendering in ~1.3s (complete above the fold after another 0.1s) and the banner has a full initial image about 1s later, complete after ~4.1s, for a speed index of 1809. Note that the ad loading takes the full page load up to 70s (!) but the user has something visually complete above the fold very quickly even on this olde-worlde bandwidth.
When I get there, progressive JPEG should also play nicely with HTTP/2.
Guetzli Dreaming
OK, while I am having fun with JPEGs... I just ran across Guetzli from Google, which attempts to provide better compression than usual within the widely-supported JPEG format, at the cost of a lot of CPU compression time.
Guetzli is a JPEG encoder that aims for excellent compression density at high visual quality. Guetzli-generated images are typically 20-30% smaller than images of equivalent quality generated by libjpeg. Guetzli generates only sequential (nonprogressive) JPEGs due to faster decompression speeds they offer.
As with zopfli and zopflipng, this enables existing clients to get the benefit of the better algorithm with no changes required.
For static Web images (that only need to be compressed once, but decoded many times) this can be a real win.
Guetzli has a number of limitations, and is profligate with memory and CPU at the moment, so even more than brotli this is an experiment for later, not for production any time soon, other than maybe the occasional critical image, manually prepared.
Again for me on my Mac, brew does the business with brew install guetzli
.
Note that Guetzli is designed to work on high quality images. You should always prefer providing uncompressed input images (e.g. that haven't been already compressed with any JPEG encoders, including Guetzli). While it will work on other images too, results will be poorer.
So as a simple test I am converting an existing medium-sized PNG to JPEG with ImageMagick (with libjpeg assumed to be doing the work inside) and with Guetzli, at highish 'quality', noting that each tool's notion of that quality may be different. Note that guetzli will not allow a quality value less than 84 currently.
% ls Spacetherm-aerogel-edge.png
58444 Spacetherm-aerogel-edge.png
% file Spacetherm-aerogel-edge.png
Spacetherm-aerogel-edge.png: PNG image data, 375 x 356, 8-bit colormap, non-interlaced
% convert -quality 90 Spacetherm-aerogel-edge.png IM.jpg
% guetzli --quality 90 Spacetherm-aerogel-edge.png G.jpg
% ls
49805 IM.jpg
39770 G.jpg
Images compared side-by-side.
The guetzli and ImageMagick images do look slightly different, but one is not necessarily better than the other, and the guetzli image is significantly smaller and took much longer to produce.
See JPEG Compression with Guetzli for a more thorough test, including tools such as jpeg-recompress
, and where it is pointed out that guetzli is intended for and "highly effective in situations where quality is the paramount concern," but in general this site is not too hung up on photo-realism.
Meanwhile Guetzli vs MozJPEG: Google's New JPEG Encoder Is SLOOOW! reports that:
- MozJPEG files had fewer bytes 6 out of 8 times
- MozJPEG and Guetzli were visually indistinguishable
- MozJPEG encoded literally hundreds to a thousand times faster than Guetzli
- MozJPEG supports progressive loading
Promising anyhow: maybe achieving a compression ratio with JPEG similar to Google's thinly-supported WebP.
2017-09-08: Mobile Traffic Fraction and Brotli Dreaming
Google's AdSense stats for this site (which may be skewed by ad blockers more heavily deployed on desktops) imply that a little under half my traffic is from desktops, one third mobile and the rest tablets. Even so, only a few percent of (human) traffic is ending up on the mobile (m.) site. I really don't want to force dynamic redirects.
Interestingly, the ~1000 hits from a link or two on Hacker News over the last couple of days seem to have a lower desktop fraction, with mobile nearly level-pegging. HN reader mean age is ~25 at a guess.
Brotli
It may be a while before I can serve brotli-compressed files to visitors, partly because clients will only request them over https, and my Apache supports neither of those, nor indeed HTTP/2, until a version newer than I can easily incrementally upgrade the current system to.
Anyhow, a girl can dream. On my Mac I ran brew install brotli
and lo and behold for the mobile home page (size is first column):
% bro --quality 11 --input index.html --output index.htmlbr
% ls
15443 index.html
 5201 index.htmlbr
 6201 index.htmlgz
The mobile pages have already had some thorough HTML minification applied, so for comparison, the (less-minified and generally fatter) desktop home page:
18633 index.html
 6147 index.htmlbr
 7338 index.htmlgz
The .htmlgz
version was compressed by zopfli
; compression with gzip -6
yields a size of 6349 bytes. (Incidentally, xz -9
only manages to compress to 6176 bytes.)
For the currently-largest (zopfli) pre-compressed page for mobile:
117199 OpenTRV-archive.html
 36785 OpenTRV-archive.htmlbr
 42903 OpenTRV-archive.htmlgz
For the currently-smallest (zopfli) pre-compressed page for mobile, the brotli-compressed form may be small enough to send in a single TCP packet including HTML, HTTP/2, TCP, IP (and Ethernet) headers, ie within 1460 bytes:
4285 OpenTRV-protocol-discussions-201412-3.html
1288 OpenTRV-protocol-discussions-201412-3.htmlbr
1606 OpenTRV-protocol-discussions-201412-3.htmlgz
So, ~15% saving in transmitted bytes over zopfli, and thus maybe ~18% over on-the-fly gzip, but no serve-time latency or significant CPU.
For comparison, a large text log file already in the data area, compressed with gzip (probably -9
) was recompressed with brotli (-11) and xz
(-9):
% ls -al data/k8055-summary-200710-to-201407*
55561168 data/k8055-summary-200710-to-201407
 4818315 data/k8055-summary-200710-to-201407.br
 6639772 data/k8055-summary-200710-to-201407.gz
 3916916 data/k8055-summary-200710-to-201407.xz
It's to be expected that xz
(LZMA2) comes out top, but brotli is no slouch for this non-HTML text either.
It is likely that brotli will be much faster than zopfli for HTML page precompression too, so it will be all good when I get there. Note that I will likely generate both gz and br precompressed versions, but the incremental cost of brotli shouldn't hurt.
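When that happens, the precompression step would presumably just grow a second output per page, something like this sketch, using the tools as invoked above:

# Sketch: generate both gz (zopfli) and br precompressed variants of each page.
for f in index.html m/index.html; do
    zopfli -c "$f" > "${f}gz"
    bro --quality 11 --input "$f" --output "${f}br"
done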
2017-09-07: Optimised Ad Injections
When injecting a second-or-subsequent standard Google Adsense ad block into a page, it is not necessary to repeat the async script line, which saves a few bytes of HTML, but also avoids at least some browsers actually loading the script twice, saving more bandwidth and CPU!
I made a small change in the page generation script to do this automatically.
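In outline the change is just a flag, so that the async loader line is emitted only with the first ad block on a page; a hypothetical sketch (the names are mine, not the real script's):

# Hypothetical sketch: emit the AdSense async loader only once per page.
ADLOADER_EMITTED=0
inject_ad_block() {
    if [ "$ADLOADER_EMITTED" -eq 0 ]; then
        cat ad-loader.html        # the single async <script> include
        ADLOADER_EMITTED=1
    fi
    cat ad-unit.html              # the ad <ins> block and its push() call
}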
I am also now wrapping the ads in aside
tags rather than plain old div
, hoping for better semantic separation of ads from content.
2017-09-06: Tired Old Eyes
I just turned 50 and find the default font size in Chrome and Safari desktop browsers silly small, but on various mobile/tablet browsers just fine. And I like walls of text and tables! I am loath to override carefully-chosen defaults, but evidence is that readability goes up with font size and down with age, and quite a few of this site's readers are likely to be not-quite-spring-chickens. So I am trying this to boost text size a little on wider screens (injected into the desktop pages' CSS only):
@media screen and (min-width:800px){.container{font-size:1.2rem}}
I tried a readability tool, eg so that I might avoid publishing text that is too complex or broken, possibly with a threshold set dependent on the target demographics of the page:
npm install readability-checker -g
Nice simple tool (thank you), but ironically the light-grey text is pretty unreadable, at least for me, in my white-background terminal.
2017-09-03: Autogen Banners Ahoy!
I am starting to auto-generate hero banners for .jpg
images. I note that ImageMagick 7.0.6-10 on my Mac seems to be able to generate much smaller files than the 6.7.7-10 on my RPi.
I can always selectively check-in 'better' ones from my Mac in the cache directory, or better as manually-curated images, to get round this if need be. I have done this for a couple of key pages.
In fact, it seems that the neat -define jpeg:extent=xxxx option, to limit (JPEG-format) file output size to the specified number of bytes if possible by a (binary) search on the 'quality' value, is broken and can even hang on V6. So I detect the version and manually step down the 'quality' value, instead of using jpeg:extent, for versions older than V7.
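A sketch of the sort of logic involved (the size ceiling, geometry and quality steps here are placeholders, not the wrapper's real values):

# Sketch: use jpeg:extent on ImageMagick 7+, else step quality down by hand.
IMMAJOR=$(convert -version | sed -n 's/^Version: ImageMagick \([0-9]*\)\..*/\1/p')
if [ "$IMMAJOR" -ge 7 ]; then
    convert "$SRC" -resize 800x -define jpeg:extent=20000 "$OUT"
else
    for q in 90 80 70 60 50; do
        convert "$SRC" -resize 800x -quality $q "$OUT"
        [ "$(wc -c < "$OUT")" -le 20000 ] && break
    done
fi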
Note that all this pretty frippery pushes up page weight and number of connections needed for a first hit on the home page: ~6 connections and 85kB www, ~7 connections and 45kB m, measured with Chrome/WebPageTest.
2017-09-02: ImageMagick 20 years on
Twenty years ago, when my Gallery was a script-driven experimental babe-in-arms running on SunOS/Solaris, I used a neat portable tool called ImageMagick to handle automated processing of images, such as the generation of thumbnails. At some point I switched to doing everything in that site in Java (eg with JAI).
But here I am again, already using the ImageMagick identify
utility to extract image dimensions on Linux (on my RPi) installed as a package. (I am able to use the file
utility on macOS to do this job at the moment for JPG and PNG files.)
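For reference, the sort of one-liners involved (the output formats differ slightly between the two tools):

% identify -format '%wx%h\n' img/tradTRV.png
% file img/tradTRV.png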
And I am just about to install ImageMagick on my Mac so as to be able to generate hero images automatically on it and Linux, which means in particular that I can test such autogeneration off-line!
Apple SIP
I am getting a bit of grief because on macOS 10.12 apparently I cannot set the DYLD_LIBRARY_PATH
environment variable for ImageMagick to find its own dynamically-loaded libraries, so for example I see:
% identify img/tradTRV.png
dyld: Library not loaded: /ImageMagick-7.0.5/lib/libMagickCore-7.Q16HDRI.2.dylib
  Referenced from: /usr/local/ImageMagick-7.0.5/bin/identify
  Reason: image not found
Abort
Messing around with DYLD_INSERT_LIBRARIES probably doesn't help either; macOS seems to strip out the DYLD_
values from the environment as a safety measure in System Integrity Protection.
I haven't had problems with executables needing dynamic libraries under /usr/local/lib
and indeed it seems that that directory may be special, eg already on the dynamic link path.
ImageMagick has a lot of libraries and some of them may clash with those already present in /usr/local/lib
:
ImageMagick-7.0.5 libMagick++-7.Q16HDRI.2.dylib libMagick++-7.Q16HDRI.dylib libMagick++-7.Q16HDRI.la libMagickCore-7.Q16HDRI.2.dylib libMagickCore-7.Q16HDRI.dylib libMagickCore-7.Q16HDRI.la libMagickWand-7.Q16HDRI.0.dylib libMagickWand-7.Q16HDRI.dylib libMagickWand-7.Q16HDRI.la libfreetype.a libfreetype.la libjpeg.a libjpeg.la liblcms2.a liblcms2.la libpng.a libpng.la libpng16.a libpng16.la libtiff.a libtiff.la libtiffxx.a libtiffxx.la pkgconfig
vs /usr/local/lib
's:
libpng.3.dylib libpng.3.dylib.dSYM libpng.a libpng.dylib libpng.la libpng12.0.dylib libpng12.0.dylib.dSYM libpng12.a libpng12.dylib libpng12.la libpng14.14.dylib libpng14.14.dylib.dSYM libpng14.a libpng14.dylib libpng14.la libpng15.15.dylib libpng15.15.dylib.dSYM libpng15.a libpng15.dylib libpng15.la libpng16.16.dylib libpng16.a libpng16.dylib libpng16.la
So as long as the png
libraries don't clash, then something similar to this incantation may work:
ln -i -s /usr/local/ImageMagick-7.0.5/lib/*.dylib* /usr/local/lib
The second time around, to confirm it was doing something, I got:
% ln -i -s /usr/local/ImageMagick-7.0.5/lib/*.dylib* /usr/local/lib
replace /usr/local/lib/libMagick++-7.Q16HDRI.2.dylib? n
not replaced
replace /usr/local/lib/libMagick++-7.Q16HDRI.dylib? n
not replaced
replace /usr/local/lib/libMagickCore-7.Q16HDRI.2.dylib? n
not replaced
replace /usr/local/lib/libMagickCore-7.Q16HDRI.dylib? n
not replaced
replace /usr/local/lib/libMagickWand-7.Q16HDRI.0.dylib? n
not replaced
replace /usr/local/lib/libMagickWand-7.Q16HDRI.dylib? n
not replaced
But no, I still get the 'Abort'.
Even a more comprehensive version did not help:
ln -i -s /usr/local/ImageMagick-7.0.5/lib/* /usr/local/lib
To see if a path is protected run ls -laO path
.
otool -L EXECUTABLE
will list the dynamic libraries needed by that executable.
When building a new binary, setting RPATH seems to work, but that doesn't immediately help me.
Homebrew
After hours and hours of what might be classed as fun, loosely, I gave up on that route (I am not prepared to turn off SIP completely), removed everything manually, and tried via Homebrew. I have used Homebrew before, but removed it when I had stopped using the tool that I installed it for. It is indeed easy to remove cleanly.
Having installed Homebrew, I then ran (all while on EuroStar's free WiFi, some of it while actually in the Channel Tunnel!):
$ brew install ImageMagick
... and a couple of minutes later I had a working install, hurrah!