Talk:Land usage

That's nice. Neat idea. I like the "rather close to a road" interpretation.

Requests for scans

Can I make a request for Land Usage scans of a couple of graticules? I'd like someone to run Cape May, New Jersey again, and Fort Fisher, North Carolina would be much appreciated as well. "kthxbai" -- Jevanyn 18:39, 4 February 2009 (UTC)

Ooh, yes please, me too, if someone enjoys running these things. I'd love to see Vancouver, British Columbia, Surrey, British Columbia, Victoria, British Columbia and Fort Nelson, British Columbia get this treatment. -Robyn 18:10, 17 June 2009 (UTC)
+1 for Victoria, British Columbia. --Wenslayer 20:23, 17 June 2009 (UTC)
I did all of the above requests just now. Obviously way later than asked for, but I just discovered the page. -Srs0 14:17, 27 August 2009 (UTC)
I just ran the script. You may wish to change some of the descriptions, as some would be "Mountains and Fields". I figured it was the numbers that people were requesting, and that they could change the names as necessary. -Srs0 16:13, 27 August 2009 (UTC)
After looking into it more, I don't know how accurate this script is for North America. With Vancouver as the example, the city is being registered as fields, since the colour Google Maps uses for fields (and mountains) is the same one they use inside the city. It's unfortunate, but I don't think that can really be fixed, unless anyone has suggestions. Too bad... It definitely worked for Würzburg though, so it does work at least in Germany! -Srs0 16:28, 27 August 2009 (UTC)

Script note

I had to make a small change when running the script on my computer to account for the opacity (alpha values). Yes, they were all 255, but the following line caused a problem because it was trying to look up a 4-tuple where the colour table expects a 3-tuple:

counts[colors[pixel]] = colors[pixel] in counts and counts[colors[pixel]]+count or count

My simple hack, since I only have a small amount of experience in Python and don't know the simple way to do this, was the following (a terrible hack, but it works):

pixelConcat=(pixel[0],pixel[1],pixel[2])
counts[colors[pixelConcat]] = colors[pixelConcat] in counts and counts[colors[pixelConcat]]+count or count

I'm certain that there's a better way to truncate a tuple like this, but I don't know it (at least yet). Any input from people used to Python would be appreciated. If it matters, I'm using Python 2.5.2 and the PNG format for my pictures.

Also interesting to note is that I got slightly different values for the Berlin graticule (which I tested against since Relet did that one first). I'm guessing it has to do with different magnifications, but I can't say for sure. The values I got were:

37.38%	Fields
32.36%	Forests
9.81%	Natural reserves
8.22%	Roads
6.98%	Settlements
3.45%	Highways
1.81%	Water
0.00%	Industrial

Well, that's all for now. Insight would be wonderful!-Srs0 16:12, 27 August 2009 (UTC)
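
For reference, a cleaner spelling of that counting line would be to slice the tuple instead of rebuilding it, and to use dict.get() instead of the and/or idiom. A minimal sketch, assuming (as in the hack above) that pixel is an RGBA 4-tuple, count is its pixel count, and colors maps RGB 3-tuples to land-use names:

rgb = pixel[:3]                             # slice keeps (R, G, B) and drops the alpha value
name = colors[rgb]                          # land-use class registered for this colour
counts[name] = counts.get(name, 0) + count  # accumulate, starting from 0 the first time

Using dict.get() also sidesteps the usual pitfall of the and/or idiom when the left-hand value happens to be falsy.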

Issues

I'm not too sure why, but I'm led to believe that this is flawed, as Auckland isn't:

  • 91.61% Water
  • 6.44% Forests
  • 0.92% Settlements
  • 0.59% Intracity Highways
  • 0.35% Industrial
  • 0.09% Roads
  • 0.01% Highways

So I removed the /100 (line 70) so I could see the raw number of 'units', totalled these, and subtracted that total from the h*w of the input image: about 50% of the pixels are still unaccounted for. So it's either discarding them, or counting duplicate pixels and reporting them as one? Something is not going right (the image is 733*912).
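
Roughly what that check looks like, as a sketch (assuming the script's counts dict holds raw per-class pixel totals once the /100 is removed, and that the screenshot opens with PIL):

from PIL import Image

def missing_pixels(image_path, counts):
    # Pixels of the screenshot that were never assigned to any land class
    width, height = Image.open(image_path).size
    return width * height - sum(counts.values())

# e.g. missing_pixels("auckland.png", counts) -- for me this came out to roughly half of 733*912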


After a little more testing, I have come to the theory that when it's looking at water it counts every unit that's there, but when it comes to anything else it misses bits and bobs. Because of this non-counting or non-reporting of those land units we are getting massively skewed results: namely, about 50% of the data isn't being recorded (that is, assuming 1 px is one unit as reported when you remove the /100 as above).

The whole concept is broken

Now that someone has calculated and published the numbers for all my surrounding graticules, which made me take a closer look at them, I think the whole concept is broken:

  • Google does not distinguish between many very different land classes, e.g.
    • they use the same colour for "rocks", "fields", "our monkey forgot to colour this" and "something you don't want to know". As a result, those huge rocks called mountains in the Swiss Alps are classified as fields. You don't want to go hashing there without a helicopter.
    • In Germany, everything that's inside a "Naturpark" is classified as "nature reserve". This is very misleading, as this kind of "park" is basically a large area mainly for tourism marketing, with little actual protection. As a result, e.g. the Pforzheim graticule is counted as almost 60% nature reserves, which is simply nonsense. In a nature reserve it would be forbidden to leave the paths, while the Black Forest is actually mostly open forest and easily accessible.
  • Place names, road names etc. take up space, and may contain colours that are used differently elsewhere (white roads, white letter outlines, white "signs")
  • Roads/highways are, for a good reason, drawn much wider than they actually are, so the number for roads is much too high
  • The numbers for settlements are far too low (possibly because most of their space on the map is taken up by roads and text)
  • Google maps was drawn by blind monkeys behind their backs

In conclusion, I think it was a good idea at the time, but it doesn't stand up to a closer look, and no matter how much you tweak it, most of those issues cannot be resolved. The available input data isn't suited for this purpose, so the output data is far from accurate or even useful. The severity of the brokenness may vary between graticules, though, so it might still be interesting to look at the data, but it would need a huge disclaimer, and I don't think it should be published as the main part of the "About" section on a graticule page.

--Ekorren 11:12, 5 September 2009 (UTC)

Beginnings of an alternate approach?

I didn't know that this page existed (bad searching on my part), so I made a tool of my own to answer the question of what proportion of a graticule is water. It's similarly hacky (it uses colours to differentiate), but it uses OpenStreetMap data, and instead of requiring screenshots to be made it calculates which tile images it needs to download and fetches them all. It's at https://gist.github.com/Dan-Q/8c39d49927c4c6de4e944c45f1326171. I think my approach to data acquisition is better but that this implementation might be a smarter way to analyse the data... ...or: maybe I should just bite the bullet and start looking at raw OpenStreetMap outputs. --DanQ (talk) 07:02, 22 August 2018 (UTC)
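
For reference, the tile-selection step of a tool like this boils down to the standard Web-Mercator conversion from latitude/longitude to OpenStreetMap tile numbers. A rough sketch, not necessarily how the gist above does it:

import math

def tile_for(lat, lon, zoom):
    # Standard Web-Mercator lat/lon -> OSM tile (x, y) at the given zoom level
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tiles_for_graticule(south, west, zoom=10):
    # All tiles covering the 1x1 degree box from (south, west) to (south+1, west+1);
    # may include one extra row/column where the box edge falls exactly on a tile edge
    x_min, y_max = tile_for(south, west, zoom)        # tile y grows southward
    x_max, y_min = tile_for(south + 1, west + 1, zoom)
    return [(x, y) for x in range(x_min, x_max + 1) for y in range(y_min, y_max + 1)]

Each (x, y) pair then corresponds to an image at https://tile.openstreetmap.org/{zoom}/{x}/{y}.png, which can be downloaded and colour-counted.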

https://danq.me/2018/08/21/graticule-water-coverage-calculator is a good rundown, thanks. I think this should be noted too: https://wiki.openstreetmap.org/wiki/Landcover -Arlo James Barnes (N35, W105) (talk) 22:11, 23 August 2018 (UTC)