Here’s a statement that no one ever said:
“Integration of a service went smoothly because every detail about every possible current and future requirement is crystal clear upfront, so we could plan ahead, estimate the effort and deliver everything on time. Oh, and let’s not forget that the documentation was great!”
The reality, in the vast majority of cases, couldn’t be more different, though.
All those years of doing software development have taught product managers and development teams that applying waterfall to everything just can’t work. Why? Well, here are a few significant reasons that apply to every “sophisticated-enough” situation:
- Initially, the only thing that exists is usually an idea, a vision for solving an existing problem, which needs implementing before the competition does it. All the related, lower-level details are yet to unfold, but that is fine.
- We need to validate our idea in practice with minimal upfront cost and a quick return on investment, to avoid venturing into dead ends and wasting resources.
- We can’t predict the future, nor can we fully understand customer demand upfront. We’ll need real user feedback to steer the feature(s) in the right direction.
Based on the above, it’s clear that software is a living thing and should evolve over time, as requirements and trends change.
Integrating a 3rd party service is no different from starting from scratch in the respects discussed above.
Let’s remind ourselves what an ancient Greek philosopher thought about the world in general:
“The only constant is change.” – Heraclitus
It’s no different in software engineering today: requirements will change, APIs will change, bugs that need fixing will probably be discovered along the way, and we might even need to change our service provider.
We have to keep all of this in mind along the way.
The first question we, as software engineers, need to ask ourselves is: do we write our own service or use an already existing provider?
In this article, we’ll only focus on integrating a 3rd party.
That said, let’s see what we should be aware of when we have to choose a geo-mapping service provider, since some of it might not be obvious at first.
Feature set
This is the first level that our chosen service provider must fulfil. The most common features a geo-mapping provider is expected to offer include one or more of the following:
Displaying a static map
This is probably the simplest use case, with very limited functionality. It is most commonly used for the contact section of our business. The most important benefit over just using a screenshot with a marker on it is that it will keep track of changes in the neighbourhoods and streets, and might even react to the user’s light/dark theme.
Panning and zooming
This is a feature anyone would expect from a non-static map. While it sounds obvious, remember that not all service providers support smooth zooming, which might be a dealbreaker.
To understand this limitation, we first need to understand how maps are rendered. There are two major approaches:
- Raster tiles – the map data is composed of small(er) images
- Vector tiles – the map data is composed of scalable graphics
Raster tiles are not designed for scalability in terms of dimensions; hence, they are not meant for smooth, continuous zooming.
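To make the raster approach concrete, here’s a minimal sketch of the “slippy map” tile addressing scheme used by OpenStreetMap-style raster tile servers. The tile URL pattern at the end is hypothetical:

```typescript
// Convert a WGS84 coordinate into "slippy map" raster tile indices at a zoom
// level: each zoom level z is a 2^z x 2^z grid of fixed-size images.
function latLonToTile(lat: number, lon: number, zoom: number): { x: number; y: number } {
  const n = Math.pow(2, zoom);
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y };
}

// At zoom 0 the whole world fits on a single tile.
const tile = latLonToTile(51.5, -0.12, 0); // { x: 0, y: 0 }

// A raster server would then serve a fixed-size image from a URL such as
// https://tiles.example.com/{z}/{x}/{y}.png (hypothetical URL pattern).
console.log(tile);
```

Because each tile is a pre-rendered bitmap for one discrete zoom level, anything between two levels has to be approximated by scaling images, which is exactly why smooth, continuous zooming doesn’t come naturally to raster tiles.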
Choropleth maps
By definition, a choropleth map presents colour-coded information for various regions. The feature sounds obvious, but the definition of a “region” is quite a complex thing; more on this later when discussing the so-called boundaries.
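The data side of a choropleth can be sketched in a few lines: bucket a per-region numeric value into a colour class. The region IDs, breakpoints and palette below are all invented for illustration:

```typescript
// Bucket a numeric value per region into a colour class.
const palette = ["#f7fbff", "#9ecae1", "#3182bd"]; // light -> dark
const breaks = [10, 100]; // upper bounds of the first two buckets

function colourFor(value: number): string {
  let i = 0;
  while (i < breaks.length && value >= breaks[i]) i++;
  return palette[i];
}

// Invented sample data, keyed by region ID.
const populationDensity: Record<string, number> = {
  "region-a": 5,
  "region-b": 42,
  "region-c": 900,
};

for (const [region, density] of Object.entries(populationDensity)) {
  console.log(region, colourFor(density)); // e.g. "region-a #f7fbff"
}
```

The hard part the provider has to solve is not this colour mapping, but supplying the region shapes themselves.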
POIs / custom markers
Another very obvious and fundamental feature of a geo-mapping service provider. Adding markers is usually an easy task, but adding them dynamically, with custom colour-coding and a custom shape/outline, can be a challenge for certain providers. Plan ahead: if we ever need this functionality, let’s ensure that the library properly supports it.
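One way to prepare for this is to keep markers as plain, provider-agnostic data and leave the actual rendering (a Leaflet icon, a vendor SDK object, etc.) to a thin adapter per provider. All names below are illustrative:

```typescript
// A provider-agnostic marker description: plain data, no vendor types.
interface MarkerSpec {
  lat: number;
  lon: number;
  color: string;                      // custom colour-coding
  shape: "pin" | "circle" | "square"; // custom shape/outline
  label?: string;
}

// Each provider integration supplies one of these:
type MarkerRenderer = (marker: MarkerSpec) => void;

const markers: MarkerSpec[] = [
  { lat: 47.5, lon: 19.04, color: "#d00", shape: "pin", label: "HQ" },
];

// A trivial renderer that just logs, standing in for a real provider adapter.
const logRenderer: MarkerRenderer = (m) => console.log(`${m.shape} at ${m.lat},${m.lon}`);
markers.forEach(logRenderer);
```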
Navigation, route planning
This is a huge area of application of geo-maps, used by a vast number of industries to optimise time and/or cost of moving vehicles, persons, etc.
This is a very complex feature and when done right, it is accompanied by:
- Real-time traffic data
- Real-time road data (temporary roadblocks, traffic regulation changes, etc.)
Forward and reverse geocoding
Forward geocoding is the process of transforming a human-readable address into a pair of latitude and longitude coordinates.
Reverse geocoding is the opposite: transforming coordinates back into an address.
Forward geocoding is, for example, what makes searching a map possible.
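A minimal sketch of what a provider-agnostic geocoding interface could look like, with a toy in-memory implementation standing in for a real provider call. The interface, class and data are invented for illustration:

```typescript
interface Coordinates {
  lat: number;
  lon: number;
}

// Both directions behind one interface; a real integration would call the
// chosen provider's API inside an implementation of this.
interface Geocoder {
  forward(address: string): Coordinates | undefined; // address -> coordinates
  reverse(coords: Coordinates): string | undefined;  // coordinates -> address
}

// A toy in-memory geocoder, enough to demonstrate the two directions.
class StaticGeocoder implements Geocoder {
  private entries: Array<[string, Coordinates]> = [
    ["10 Downing Street, London", { lat: 51.5034, lon: -0.1276 }],
  ];

  forward(address: string): Coordinates | undefined {
    return this.entries.find(([a]) => a === address)?.[1];
  }

  reverse(coords: Coordinates): string | undefined {
    return this.entries.find(
      ([, c]) => c.lat === coords.lat && c.lon === coords.lon
    )?.[0];
  }
}
```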
Theming
Tailoring the look and feel of an application is an important part of the user experience, so offering a good variety of themes and being open to adding more is attractive to end users as well as to developers.
At the end of the day, our customers are the end users of our maps, so why not give them the ability to apply custom themes, whether purely for look and feel or for accessibility as well?
Coverage and granularity
Taxonomies and levels
In order to understand this, we’ll need to delve deeper into representational standards, and the lack of them. Every country on the planet uses a somewhat different standard for organising and representing its sub-regional entities, each with a different taxonomy.
Just to get a grasp of the idea, here are some common taxonomies:
- Administrative (the country and the sub-regions it defines for itself)
- Postal (postcode areas)
The taxonomies above have sublevels, usually denoted by numbers: administrative 0 represents the whole country, while administrative 1 represents the first sublevel that the country defines for itself.
The shapes of these entities are called boundaries.
Some examples of differences:
- Whether a dedicated notation name exists at all
- The number of boundaries on a given level (e.g. Bucharest plus the counties)
- Postcode formats: 4-6 digit notations in some countries, a 5 digit notation in others
Furthermore, there’s absolutely no guarantee that all the taxonomy and sublevel combinations exist for every country, which makes maintenance and uniform integration even more difficult.
Knowing all this, it is clear that we do not want to maintain all the shape data for the massive combination of the taxonomies, so if we need this data, we should get it from a service provider.
If the number of combinations wasn’t enough, add world views on top. This is something the majority of people tend to forget, as it’s so natural to assume the US world view, which is indeed used by the majority of countries.
There are some exceptions though, because not all of the world agrees on where country borders are.
Depending on where our customers are, presenting a map that conforms to their world view is vital, so let’s keep this in mind when choosing a provider.
The provider’s business model
In general, providers can be classified into two categories: B2B (business to business) or B2C (business to consumer).
For a complex enough application, it’s worth considering a B2B type provider because it offers some key advantages.
As a B2B customer, we’ll probably get dedicated support for production incidents, and/or expert guidance from the provider throughout the whole integration process.
Uptime and maintenance guarantee
B2B-type providers guarantee an SLA, and it’s also much “harder” for them to discontinue the service. As B2B customers, this gives us peace of mind that no sudden action will be needed because a service our business depends on is suddenly being sunset.
The above is not guaranteed with B2C-type providers.
Data source curation
This is, or can be, a sensitive topic. Curation of the data appearing on the map is a vital part of the service provider, because – as with any other dependency – we don’t want any kind of vandalism on our maps.
Sadly, this happens from time to time, hence some providers stopped being “open” (in terms of anyone being able to contribute anything to the maps) and established semi-automated processes to ensure the data that reaches them is free of hate speech and vandalism.
Pricing
This is quite obvious but needs mentioning: pricing is an important factor in choosing a geo-mapping provider. It is entirely up to the provider what pricing models they offer; some are even free to use.
After we’ve finished the assessment for the above and we know which provider to choose, we will probably get to the implementation phase.
Among all the evergreen principles of programming, let’s remember the “L” from the SOLID principles: the Liskov substitution principle, named after Barbara Liskov (the SOLID grouping itself was popularised by Uncle Bob).
What this means for us (on a high level) is that we, as developers, must maintain a provider-agnostic interface for our features and hide all the implementation details behind it, so that nothing provider-specific leaks out.
You might rightfully ask why. The answer might not be obvious, but there are a couple of reasons.
The most important is to avoid the phenomenon called “vendor lock-in”. Failing to do so will leave us unable to switch providers when:
- The service becomes too expensive
- A new, better service provider appears over time
- The provider discontinues the service
Respecting the Liskov principle also opens the door for us to support more than one provider at a time, if we ever want to.
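A minimal sketch of such a provider-agnostic facade in TypeScript. The vendor names and methods are invented for illustration:

```typescript
// Application code depends only on this interface, so swapping vendors
// means writing one new adapter rather than touching feature code.
interface MapService {
  setCenter(lat: number, lon: number, zoom: number): void;
  addMarker(lat: number, lon: number): void;
  providerName(): string;
}

class VendorAAdapter implements MapService {
  private markers = 0;
  setCenter(_lat: number, _lon: number, _zoom: number): void {
    // would call vendor A's SDK here
  }
  addMarker(_lat: number, _lon: number): void {
    this.markers++; // would call vendor A's SDK here
  }
  providerName(): string {
    return "vendor-a";
  }
  markerCount(): number {
    return this.markers;
  }
}

class VendorBAdapter implements MapService {
  setCenter(_lat: number, _lon: number, _zoom: number): void {
    // would call vendor B's SDK here
  }
  addMarker(_lat: number, _lon: number): void {
    // would call vendor B's SDK here
  }
  providerName(): string {
    return "vendor-b";
  }
}

// Feature code is written once, against the interface only:
function showOffice(map: MapService): void {
  map.setCenter(47.4979, 19.0402, 12);
  map.addMarker(47.4979, 19.0402);
}

showOffice(new VendorBAdapter());
```

Supporting a second provider, or running two side by side, then amounts to instantiating a different adapter.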
It’s not possible to get past this topic without mentioning the well-known open-source library, Leaflet.
In a nutshell, Leaflet depends on an external tile service to render a map’s contents, while the rest (drawing shapes on the map, adding markers, etc.) is managed by the library itself, which can already be considered vendor-agnostic code.
One of the downsides of it can be that it doesn’t support vector layers out of the box (there are plugins for this though).
Furthermore, when our main driver for integrating a geo-mapping solution is rendering massive amounts of boundaries (which tend to originate from commercial sources that already ship their own client-side library), Leaflet might not be the best route to choose.
A word of caution
In case the service we’ve chosen is billed using a “pay as you go” model:
We must ensure fair usage within our customer base.
Real-time monitoring has to be in place in order to see usage statistics and detect overuse.
On overuse, some actions (preferably automated) should be taken, like throttling the service responses for a limited amount of time for the offending parties.
As a last resort, there should be an emergency lever implemented in our system that, when pulled, turns off the service gracefully, preferably for a single user or group of users instead of every single user.
This is the last line of defence against costs going out of control at the end of the billing period.
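The throttle-then-cut-off idea can be sketched as a per-user usage counter with a soft and a hard limit. The thresholds, names and in-memory counter below are all invented; a production system would back this with real-time usage metrics:

```typescript
type Decision = "allow" | "throttle" | "deny";

class UsageGuard {
  private counts = new Map<string, number>();

  constructor(private softLimit: number, private hardLimit: number) {}

  check(userId: string): Decision {
    const used = (this.counts.get(userId) ?? 0) + 1;
    this.counts.set(userId, used);
    if (used > this.hardLimit) return "deny";     // graceful per-user cut-off
    if (used > this.softLimit) return "throttle"; // slow down, keep serving
    return "allow";
  }
}

const guard = new UsageGuard(1000, 5000); // invented per-period request limits
console.log(guard.check("user-1")); // "allow" on the first request
```

The important property is that the cut-off is scoped to the offending user(s), so one runaway integration doesn’t take the map away from everyone.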
Ideally, the service provider of our choice should cover everything that we don’t want to implement and maintain, be it the actual source code, or boundary data maintenance.
Planning ahead and designing software and integrations with change in mind is crucial for every living software project to be successful in the long term.
There’s an abundance of geo-mapping service providers on the market right now, and doing our research while keeping our requirements and limitations in mind will help us choose the right tool for the job.