Land usage
Geographical science distinguishes two related terms: land cover and land use. Both apply to any area on the Earth's surface (and a hash can fall within any given area, depending only on chance). The first describes what is physically at a given place: water, grass, bare rock, asphalt, whatever. The second describes what humans are using that place for: marine reserves, grazing land, mining, transport, recreation, and so on.
Geohashers care about this because it affects how easy it is to reach the hash and what you can do once you get there. Residents of a given graticule also care because, once they know what percentage of their graticule falls into each category, they can estimate the proportion of times the hash will land in that category. Hence the tools below, which read the colour palette that an online map (by default the Peeron map, which uses Google Maps API v1 and currently produces an ugly overlay) assigns to the various covers and uses.
See also osmwiki:landcover and osmwiki:landuse (and osmwiki:stylesheets and specifically osmwiki:CartoCSS#CartoCSS style for OSM.org's tiles if you want to adapt relet's tool to OSM).
user:relet's tool
Here's a small piece of code that allows you to calculate the land usage distribution in your graticule. It's a hack, and you should know how to interpret the results.
Usage
- Highlight your graticule in the Peeron map.
- Make a screenshot (moar zoom = moar better).
- Save it in your preferred lossless image format (png is fine).
- Run the script below with the image file as a parameter.
Note: You may have to adapt the colour values (read the next section). If you are getting no output at all, this is the likely cause.
Note on versions: As of 17 June 2009, the python-image module that this script requires works only with Python 2.6.
How it works
The script basically counts pixels on your screenshot. It has a list of colours which are used on the map for certain areas.
On the screenshot of your graticule, all pixels within the graticule highlight are slightly rosé, compared to everything else. Forests are green-rosé, Natural reserves are dark-green-rosé, bodies of water are blue-rosé, and so on. The script counts all pixels for which a meaning is given, and ignores everything else. Finally, it compares the count of each colour with the total count of identified pixels.
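The count-and-compare step can be sketched in a few lines of plain Python. This is a simplified stand-in, not the actual script: it assumes the pixel data is already available as a list of RGB tuples, and uses a trimmed-down colour table (the real one maps many shades per category).

```python
from collections import Counter

# A two-entry sample of the colour table; purely illustrative.
COLORS = {(211, 215, 198): "Forests", (171, 185, 205): "Water"}

def usage_shares(pixels):
    """Tally pixels whose colour has a known meaning; return percentages."""
    tally = Counter(COLORS[p] for p in pixels if p in COLORS)
    total = sum(tally.values())
    return {label: 100.0 * n / total for label, n in tally.items()}

# Three forest pixels, one water pixel, one unknown pixel (ignored):
pixels = [(211, 215, 198)] * 3 + [(171, 185, 205), (0, 0, 0)]
print(usage_shares(pixels))  # {'Forests': 75.0, 'Water': 25.0}
```

Note that percentages are relative to the identified pixels only; the unknown black pixel drops out of the total entirely, just as in the full script.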
Note that Google uses antialiasing. Hence, the script will only recognize large areas of uniform colour, not anything in between. Also, due to shading effects, several colours are used for the same type of area across the graticule. The colours given are a few examples of the rendering used in Germany - if, for example, your highways are rendered in a different colour, you may have to adapt them. You may also want to add your own. To do so, use the pipette tool in your favourite image editor and select a pixel in a large field of uniform colour, then copy the colour values for R(ed), G(reen), and B(lue). I have tried to compile a first list of used colours in the source code; please update this as needed.
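Because of antialiasing and shading, exact lookups miss many pixels. One hypothetical workaround (not part of relet's script) is to match each pixel to the nearest known colour within a tolerance, so slightly shaded variants no longer need their own table entries:

```python
# Hypothetical tolerance matching; the wiki script uses exact lookups instead.
KNOWN = {(211, 215, 198): "Forests", (171, 185, 205): "Water"}

def classify(pixel, tolerance=8):
    """Return the label of the nearest known colour, or None if none is close."""
    cutoff = tolerance ** 2 * 3  # max allowed squared distance across 3 channels
    best_label, best_dist = None, cutoff + 1
    for ref, label in KNOWN.items():
        dist = sum((a - b) ** 2 for a, b in zip(pixel, ref))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= cutoff else None

print(classify((212, 216, 199)))  # 'Forests' (a slightly shaded forest pixel)
print(classify((10, 10, 10)))     # None (no known colour nearby)
```

The trade-off is that a too-generous tolerance starts absorbing antialiased boundary pixels into the wrong category, so it is worth keeping it small.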
As the maps overemphasize roads at smaller scales, the road results will likewise be exaggerated. You may want to ignore them altogether, or interpret the value as "being rather close to a road/highway".
Basically, the script currently differentiates between:
- Everything pale white: Uncharted land - usually: agriculture, wilderness, ...
- Everything light grey: Settled land - larger cities
- Everything dark grey: Restricted areas - Industrial, Military, Airports, ...
- Everything pale green: Forests
- Everything dark green: Natural reserves, parks, and golf courses.
- Everything blue: Water.
- Everything yellow: Larger roads, which you can still see on the smaller scales.
- Everything orange: Highways, motorways.
Depending on your graticule, the labels you want to use may differ. But it's usually easier to fix that after the calculation.
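"Fixing it after the calculation" can be as simple as remapping labels on the finished percentages. This sketch (with made-up numbers, not taken from the Berlin run) folds both road categories into a single "near a road" figure:

```python
# Hypothetical relabelling step applied to already-computed percentages.
results = {"Forests": 37.0, "Roads": 9.25, "Highways": 3.75}
remap = {"Roads": "Near a road", "Highways": "Near a road"}

merged = {}
for label, pct in results.items():
    new_label = remap.get(label, label)
    merged[new_label] = merged.get(new_label, 0.0) + pct

print(merged)  # {'Forests': 37.0, 'Near a road': 13.0}
```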
Example
./graticount.py berlin.png
37.03%	Forests
29.14%	Fields
11.23%	Natural reserves
9.19%	Roads
6.84%	Settlements
3.84%	Highways
2.12%	Water
0.62%	Industrial
Code
It's Python. Ready for take-off.
#!/usr/bin/env python
# Requires the Python Imaging Library (imported as "Image"); Python 2.x only.
import Image
import sys

# Map colours (R, G, B) as rendered on the map to a land-use label.
# Several neighbouring shades are listed per category because of shading effects.
colors = {
    (183, 205, 161): "Natural reserves",
    (183, 205, 162): "Natural reserves",
    (184, 206, 162): "Natural reserves",
    (185, 207, 163): "Natural reserves",
    (211, 215, 198): "Forests",
    (211, 215, 199): "Forests",
    (212, 215, 199): "Forests",
    (212, 216, 199): "Forests",
    (213, 217, 200): "Forests",
    (216, 208, 206): "Industrial",
    (216, 208, 207): "Industrial",
    (216, 210, 210): "Industrial",
    (218, 210, 218): "Industrial",
    (235, 224, 214): "Settlements",
    (235, 224, 215): "Settlements",
    (235, 224, 216): "Settlements",
    (235, 225, 216): "Settlements",
    (236, 225, 216): "Settlements",
    (237, 226, 217): "Settlements",
    (242, 195, 72): "Highways",
    (243, 195, 72): "Highways",
    (243, 196, 72): "Highways",
    (243, 196, 73): "Highways",
    (243, 197, 71): "Highways",
    (244, 196, 73): "Highways",
    (245, 197, 73): "Highways",
    (242, 233, 227): "Fields",
    (243, 233, 228): "Fields",
    (243, 233, 229): "Fields",
    (243, 234, 229): "Fields",
    (243, 235, 229): "Fields",
    (245, 235, 230): "Fields",
    (252, 241, 134): "Roads",
    (252, 241, 135): "Roads",
    (252, 242, 135): "Roads",
    (253, 242, 135): "Roads",
    (253, 243, 135): "Roads",
    (254, 243, 135): "Roads",
    (254, 244, 134): "Roads",
    (255, 244, 136): "Roads",
    (171, 185, 205): "Water",
    (171, 186, 206): "Water",
    (172, 185, 205): "Water",
    (172, 186, 205): "Water",
    (172, 186, 206): "Water",
    (173, 187, 207): "Water",
    (254, 132, 93): "Intracity Highways",
}

stats = {}
counts = {}
total = 0
results = []

# Tally every pixel colour in the screenshot.
image = Image.open(sys.argv[1])
for pixel in image.getdata():
    stats[pixel] = stats.get(pixel, 0) + 1

# Sum the tallies per label; pixel[:3] drops any alpha channel, so the
# lookup must also use pixel[:3] (using the full pixel raised a KeyError
# on RGBA screenshots).
for pixel, count in stats.iteritems():
    if pixel[:3] in colors:
        label = colors[pixel[:3]]
        counts[label] = counts.get(label, 0) + count

# Express each label's count as a percentage of all identified pixels.
for label, count in counts.iteritems():
    total = total + count
for label, count in counts.iteritems():
    results.append((count * 100.0 / total, label))

results.sort(reverse=True)
for result in results:
    print("%.2f%%\t%s" % result)
Dan Q's tool
Before he discovered that the above existed, Dan Q made a tool that uses a similar technique to estimate the amount of water covering a graticule. It's less-sophisticated but simpler if all you want to know is how likely a hashpoint in a given graticule is to land you in the drink.
Code
JavaScript:
/*
 * More details can be found at:
 * https://danq.me/2018/08/21/graticule-water-coverage-calculator/
 *
 * Given a graticule (e.g. 51 -1), returns the percentage water cover
 * of that graticule based on pixel colour sampling of OpenStreetMap
 * tile data. Change the zoomLevel to sample with more (higher) or less
 * (lower) granularity: this also affects the run duration. Higher
 * granularity improves accuracy both by working with a greater amount
 * of data AND by minimising the impact that artefacts (e.g. text,
 * borders, and ferry lines, which are usually detected as land) have
 * on the output.
 *
 * Expects a Unix-like system. Requires grep, wc, wget, and "convert"
 * (from the ImageMagick suite). And a Javascript interpreter (e.g.
 * node), of course. On a Debian/Ubuntu-like distro, all non-node
 * dependencies can probably be met with:
 *   sudo apt install -y wget imagemagick
 *
 * (c) Dan Q [danq.me] 2018; no warranty expressed or implied; distribute
 * freely under the MIT License (https://opensource.org/licenses/MIT)
 *
 * Sample outputs:
 *   $ node geohash-pcwater.js 51 -1 # Swindon, Oxford (inland)
 *   ...
 *   Water ratio: 0.68%
 *
 *   $ node geohash-pcwater.js 49 -2 # Channel Islands (islandy!)
 *   ...
 *   Water ratio: 93.13%
 */
const { execSync } = require('child_process');

const lngToTile = (lng, zoom) => (Math.floor((Number(lng) + 180) / 360 * Math.pow(2, zoom)));
const latToTile = (lat, zoom) => (Math.floor((1 - Math.log(Math.tan(lat * Math.PI / 180) + 1 / Math.cos(lat * Math.PI / 180)) / Math.PI) / 2 * Math.pow(2, zoom)));

const urlRange = (lng1, lng2, lat1, lat2, zoom) => {
  let urls = [];
  const x1 = lngToTile(lng1, zoom), x2 = lngToTile(lng2, zoom),
        y1 = latToTile(lat1, zoom), y2 = latToTile(lat2, zoom);
  for(let x = Math.min(x1, x2); x <= Math.max(x1, x2); x++){
    for(let y = Math.min(y1, y2); y <= Math.max(y1, y2); y++){
      const server = String.fromCharCode(Math.floor(Math.random() * 3) + 97); // a, b, or c
      const url = `https://${server}.tile.openstreetmap.org/${zoom}/${x}/${y}.png`;
      urls.push(url);
    }
  }
  return urls;
}

if(process.argv.length < 4){
  console.log('Syntax: node geohash-pcwater.js 51 -1 (where 51 -1 is your graticule)');
  process.exit();
}

const graticule = [process.argv[2], process.argv[3]];
const zoomLevel = 10; // OpenStreetMap zoom level; impacts granularity of sampling: each time
                      // you add one you'll approximately quadruple the number of tiles to
                      // download for a given graticule - 10 seems nice

graticuleTop    = (graticule[0][0] == '-' ? parseInt(graticule[0])     : parseInt(graticule[0]) + 1);
graticuleBottom = (graticule[0][0] == '-' ? parseInt(graticule[0]) - 1 : parseInt(graticule[0]));
graticuleLeft   = (graticule[1][0] == '-' ? parseInt(graticule[1]) - 1 : parseInt(graticule[1]));
graticuleRight  = (graticule[1][0] == '-' ? parseInt(graticule[1])     : parseInt(graticule[1]) + 1);

const images = urlRange(graticuleLeft, graticuleRight, graticuleTop, graticuleBottom, zoomLevel);
console.log(`${images.length} images must be processed...`);

let pxTotal = 0, pxWater = 0;
for(let url of images){ // for each tile...
  console.log(`Fetching ${url}:`);
  execSync(`wget ${url} -qO tmp.png`); // use wget to download the tile
  console.log(' > extracting data');
  execSync('convert tmp.png tmp.txt'); // use imagemagick to extract the data as text
  console.log(' > analysing');
  pxTotal += (parseInt(execSync('cat tmp.txt | wc -l').asciiSlice().trim()) - 1); // wc/grep the text
  pxWater += (parseInt(execSync('grep -Ei "#(abd3df|aad3df)" tmp.txt | wc -l').asciiSlice().trim())); // abd3df and aad3df are the hex codes for the two colours I've seen of water
}

pcWater = Math.round((pxWater / pxTotal) * 10000) / 100;
console.log(`Water ratio: ${pcWater}%`);
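The lngToTile/latToTile arrow functions above implement the standard OpenStreetMap "slippy map" tile-numbering formulas: longitude maps linearly onto tile columns, while latitude goes through the Web Mercator projection. As a sanity check, here is a minimal Python port (my own sketch, not part of Dan Q's script):

```python
import math

def lng_to_tile(lng, zoom):
    # Linear mapping of longitude -180..180 onto tile columns 0..2^zoom.
    return int((lng + 180) / 360 * 2 ** zoom)

def lat_to_tile(lat, zoom):
    # Web Mercator projection for tile rows (row 0 is the northernmost).
    rad = math.radians(lat)
    return int((1 - math.log(math.tan(rad) + 1 / math.cos(rad)) / math.pi) / 2 * 2 ** zoom)

# The north-west corner tile of the 51 -1 graticule at zoom 10:
print(lng_to_tile(-1, 10), lat_to_tile(51, 10))  # 509 342
```

At zoom 10 a one-degree graticule spans roughly three tile columns, which is why each extra zoom level quadruples the download count, as the script's comment notes.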