Relevant? -> "Falsehoods programmers believe about addresses" (https://www.mjt.me.uk/posts/falsehoods-programmers-believe-a...)
Discussed on HN here: https://news.ycombinator.com/item?id=8907301
I somehow doubt this will pass the sniff test of one of my old addresses, which Australia Post successfully delivered to on a weekly basis:
Third on right of main,
Tiwi College,
Melville Island, 0822, AU.
You can try to normalize that... But "Main Road" is in another city. Because I wasn't living in a city. There were no road names. And the 3rd position was an empty plot, not the third house. We had a bunch of houses around a strip of land, a few minutes from the airstrip - the only egress.
You also have to account for interestingly worded addresses. We had "
That's very specific, but also not really an address.
When I was first getting into web development a year ago, I was making forms that took addresses. Coming from a C and C++ background, I kept asking: what if they live in a specific country? How can I make my database truly safe? What is the best way to store all these addresses? I immediately gave up on that effort. Very impressive.
Wow, ambitious project. Anybody who has tried to verify addresses can tell you that the staggering number of different formats and conventions around the world makes it an almost intractable problem. So many countries have wildly informal standards, with people putting down whatever they want because the mailman "just knows".
> Anybody who has tried to verify addresses
Why would one try to "verify" addresses that one knows nothing about?
> because the mailman "just knows"
The mailman does "just know", and the mailman is who the address is for. Web forms I have seen that have tried to "verify" my address have never done so in a way that made the address better for the mailman.
EDIT: I've long thought that web forms should not have separate "street", "street line 2", "number", "apartment", "whatever" fields. Instead they should offer a multi-line input field labeled "this will go straight on the address label, write whatever you like but it's your problem if it doesn't arrive". You'd probably still need separate fields for town/postcode for calculating postage. And of course it wouldn't work because the downstream delivery company would also insist on something it can "verify".
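A minimal sketch of the storage model this suggests, assuming a Python backend (the class and field names are mine, purely illustrative):

    from dataclasses import dataclass

    @dataclass
    class ShippingAddress:
        # Printed verbatim on the label; never parsed or "verified".
        label_text: str
        # Kept separate only because postage calculation needs them.
        town: str
        postcode: str

The whole point of the design is that nothing downstream ever tries to decompose label_text into street/number/apartment fields.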
For the US, the underlying need for parsing is to determine a definitive location so that taxation, which can vary down to the municipality level, can be computed.
Maxmind is the quintessential example of what devs want to build in their heart of hearts: low-touch sales, but B2B. Almost a monopoly. Prints money for decades. Not a public company, so they never raise prices to usurious levels. Open source never quite meets the level needed.
In the same vein, there is also Google's excellent libphonenumber for parsing, formatting, and validating international phone numbers.
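For instance, with its Python port (the phonenumbers package) a number can be parsed, validated, and reformatted in a few lines:

    import phonenumbers

    # Parse an international number; no region hint is needed with a leading +.
    n = phonenumbers.parse("+1-650-253-0000", None)

    print(phonenumbers.is_valid_number(n))  # True
    print(phonenumbers.format_number(n, phonenumbers.PhoneNumberFormat.NATIONAL))
    # (650) 253-0000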
And, because I had no idea of this before I worked on a project dealing with customer data: many companies also use commercial services for address and phone number validation and normalization.
I have a real soft spot for these codifications of everyday things. A lot of us do. See also tzdata, GNU units, pluralize(noun), humanize(timestamp), and SPICE astronavigation. And yes, locating Mars in the night sky is indeed an everyday thing!
What are some others?
Libpostal is great and was a lifesaver for me, but anyone who is interested in using it should be aware that it is NOT lightweight.
IIRC it takes gigs of storage space and has significant runtime requirements.
Also, while it's implemented in C, there are language bindings for most major languages [1].
It's one of those things where it's most likely best deployed as an independent service on a dedicated machine.
[1] https://github.com/openvenues/libpostal?tab=readme-ov-file#b...
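As a rough sketch of what those bindings look like, here is the Python one (this example is adapted from the project's README):

    from postal.parser import parse_address

    # Returns (value, label) pairs: house_number, road, city, postcode, ...
    parse_address("781 Franklin Ave Crown Heights Brooklyn NYC NY 11216 USA")
    # [('781', 'house_number'), ('franklin ave', 'road'),
    #  ('crown heights', 'suburb'), ('brooklyn', 'city_district'),
    #  ('nyc', 'city'), ('ny', 'state'), ('11216', 'postcode'),
    #  ('usa', 'country')]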
There are many useful applications of libpostal, and it's an impressive library, but one I would caution against is for the purpose of address matching, at least as the 'primary' approach.
The problem is the hardest to parse addresses are also often the hardest to match, making the problem somewhat circular. I wrote about this more in a recent blog on address matching: https://www.robinlinacre.com/address_matching/
Previously:
<https://news.ycombinator.com/item?id=18775099> Libpostal: A C library for parsing/normalizing street addresses around the world - 117 points by polm23 on Dec 29, 2018 (25 comments)
<https://news.ycombinator.com/item?id=11173920> Libpostal: international street address parsing in C trained on OpenStreetMap (mapzen.com) 74 points by riordan on Feb 25, 2016 (7 comments)
I used this at a previous company with quite good success.
With relatively minimal effort, I was able to spin up a little standalone container that wrapped around the service and exposed a basic API to parse a raw address string and return it as structured data.
Address parsing is definitely an extremely complex problem space with practically infinite edge cases, but libpostal does just about as well as I could expect it to.
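A minimal sketch of such a wrapper, assuming the Python binding and Flask (the endpoint name and JSON shape are my own invention, not from libpostal):

    from flask import Flask, jsonify, request
    from postal.parser import parse_address

    app = Flask(__name__)

    @app.route("/parse", methods=["POST"])
    def parse():
        raw = request.get_json().get("address", "")
        # parse_address returns (value, label) pairs; folding them into a
        # dict drops repeated labels, which is fine for a simple API.
        return jsonify({label: value for value, label in parse_address(raw)})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)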
Worth noting that libpostal requires ~2GB RAM when fully loaded due to its comprehensive data models. For containerized deployments, we reduced memory usage by ~70% by compiling with only the specific country models needed for our use case.
Ditto - I was impressed with how well it handled the weird edge cases in our data.
They've managed to create a great working implementation of a very, very small model of a very specific subset of language.
I think fundamentally, no parsing/normalizing library can be effective for addresses. A much better approach is to have a search library which finds the address you're looking for within a dataset of all the addresses in the world.
Addresses are fundamentally unstructured data. You can't validate them structurally. It's trivial to create nonexistent addresses which any parsing library will parse just fine. On the flipside, there's enough variety in real addresses that your parser has to be extremely tolerant in what it accepts--so tolerant that it basically tolerates everything. The entire purpose of a parser for addresses is to reject invalid addresses, so if your parser tolerates everything it's pointless.
The only validation that makes any sense is "does this address exist in the real world?". And the way to do that is not parsing, it's by comparing to a dataset of all the addresses in the world.
I haven't evaluated this project enough to understand confidently what they're doing, but I hope they're approaching this as a search engine for address datasets, and not as a parsing/normalizing library.
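In toy form, that kind of validation is set membership rather than parsing. A sketch, using libpostal's expand_address purely to canonicalize spelling variants, and assuming the reference dataset has been normalized the same way:

    from postal.expand import expand_address

    def could_exist(raw, known_addresses):
        # expand_address yields plausible normalizations of the input
        # ("W St" -> "west street", etc.); accept the address only if one
        # of them appears verbatim in the reference dataset.
        return any(form in known_addresses for form in expand_address(raw))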
And keeping such datasets up to date is another matter entirely, because clearly a lot of companies rely on datasets that were outdated before their company even existed.
A trivially simple example of how messy this gets when people try to constrain it: it was nearly random whether a given carrier would insist on me giving an incorrect address for my previous place, seemingly because, prior to 1965, the address was traditionally in Surrey, England.
The "postcode area name" for my old house is Croydon, and Croydon has legally been part of London since 1965; it was allocated its own postcode area in 1966. "Surrey" hasn't been correct for addresses in Croydon since then.
But at least one delivery company insisted my old address was invalid unless I changed the town/postcode area to "Surrey", and refused to even attempt a delivery. Never mind they had my house number and postcode, which was sufficient to uniquely identify my house.