Sunday, July 21, 2019

Loyalty & a fragmented transport network won't work!

I recently read an article on Wired.com about how USA-based transit providers are offering loyalty rewards to encourage usage of their services.

In my opinion, loyalty rewards are offered for only two purposes:
1. In exchange for customer data (e.g. give us your email for 10% off first purchase)
2. To change user behaviour (e.g. get 20% off rail fares for travelling off-peak)

However, this article got me thinking about how a multi-modal (rail, bus, subway, ferry, etc.) transport network could never hope to implement an effective loyalty scheme if any one scheme only covered a single mode... in other words, could this work in a fragmented area such as Scotland?

So here's my article on Linkedin explaining this in more detail:
https://www.linkedin.com/pulse/supporting-loyalty-rewards-fragmented-transport-hayden-sutherland/

In reality... of course I know it won't work unless:

  • All operators share the same technology/software/account application (and I know how unlikely that would be in a privatised industry)
  • There are ways for all these accounts to integrate, so that an aggregated view of all accounts is possible in one place.
    And it is this purpose that I think an Open Transport API should work towards (see the sketch after this list).
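As a rough illustration only, here is a minimal sketch in Python of what such an aggregated, cross-operator account view might look like. The operator names, fields and methods are entirely hypothetical; the real Open Transport API may define things very differently.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical structures - the actual Open Transport API may define these differently.
@dataclass
class OperatorAccount:
    operator: str        # e.g. a rail, bus or ferry operator
    mode: str            # "rail", "bus", "ferry", ...
    balance_points: int  # loyalty points held with that operator

@dataclass
class AggregatedView:
    customer_ref: str
    accounts: List[OperatorAccount]

    def total_points(self) -> int:
        # One figure across every operator the customer uses
        return sum(a.balance_points for a in self.accounts)

# Example: one customer, three fragmented operators, one combined view
view = AggregatedView(
    customer_ref="cust-001",
    accounts=[
        OperatorAccount("ExampleRail", "rail", 120),
        OperatorAccount("ExampleBus", "bus", 45),
        OperatorAccount("ExampleFerry", "ferry", 10),
    ],
)
print(view.total_points())  # 175
```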

Friday, July 19, 2019

Taking Mobility-as-a-Service Seriously

This week I have made it public knowledge that Ideal Interface has joined MaaS Scotland.
https://blog.idealinterface.co.uk/2019/07/19/ideal-interface-joins-maas-scotland/

Although technically we joined just before I attended the MaaS Scotland Annual Conference in Edinburgh on June 20th... this announcement is part of my growing aim to get more involved in the ongoing development and industry acceptance of an Open Transport API for account interoperability.


Wednesday, July 17, 2019

LIFESPAR cloud architecture principles to follow

With more and more organisations adopting cloud technologies for their applications, I've seen the tendency to just "lift and shift" architectures. Physical servers are replicated as virtual machines and the same software applications as before are run "on somebody else's computer".

But this approach doesn't leverage many of the benefits of software-as-a-service (SaaS) or the new cloud-only components available as platform-as-a-service (PaaS). It also means that architects creating cloud-native solutions need a different set of principles than before.
Some of the best guidance I have seen in this area comes from Gartner, who use the acronym LIFESPAR to explain the principles to follow when designing cloud-native architectures:

  • Latency-aware
  • Instrumented
  • Failure-aware
  • Event-driven
  • Secure
  • Parallelizable
  • Automated
  • Resource-consumption-aware

So what is LIFESPAR and what does it mean to an architect?

Latency-aware:
Understand that every application needs to be designed & implemented knowing that it may not get an instant response to each request. This latency could be milliseconds, or it could be seconds, so ensure each solution elegantly deals with delays and is tested to prove it works under these conditions.
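A minimal sketch of what "dealing elegantly with delays" can mean in practice, using only Python's standard library; the URL and timeout value are purely illustrative.

```python
import urllib.request
import urllib.error

def fetch_with_timeout(url: str, timeout_seconds: float = 2.0):
    """Return the response body, or None if the upstream is slow or unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_seconds) as resp:
            return resp.read()
    except (urllib.error.URLError, TimeoutError):
        # Latency-aware: don't hang indefinitely - degrade gracefully instead,
        # e.g. return cached data or a partial result to the caller.
        return None

body = fetch_with_timeout("https://example.com/api/status")
```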

Instrumented:
Every solution and every component must generate sufficient data about their usage and ideally send this information back to a central location, so that real-time & subsequent decisions about the architecture, cost, volumes, etc. can be made. In this way, an instrumented approach supports an elastic, automatically scaling system (scaling both up AND down).
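As a sketch of what "sufficient data about usage" could mean, here is a tiny instrumentation helper using only the standard library; the metric names and the idea of shipping records to a central collector are my assumptions.

```python
import json
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("metrics")

@contextmanager
def instrumented(operation: str):
    """Emit a structured record of duration and outcome for each operation."""
    start = time.perf_counter()
    outcome = "ok"
    try:
        yield
    except Exception:
        outcome = "error"
        raise
    finally:
        record = {
            "operation": operation,
            "duration_ms": round((time.perf_counter() - start) * 1000, 2),
            "outcome": outcome,
            "timestamp": time.time(),
        }
        # In a real system this record would be shipped to a central collector
        # so that scaling (up AND down) and cost decisions can be automated.
        log.info(json.dumps(record))

with instrumented("process_order"):
    time.sleep(0.05)  # stand-in for real work
```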

Failure-aware:
Remember that things fail (hardware, processes, etc.) and that software, created by humans, is rarely bug-free. So always design solutions that wait, or fail & retry, in the way you need them to. Failures must also be comprehensively tested - if necessary, writing code to force failures (e.g. the Chaos Monkey). Also bear in mind that in some scenarios... latency and failure are the same thing.
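A minimal retry-with-backoff sketch, assuming the operation being retried is idempotent; the attempt counts and delays are illustrative, not a recommendation.

```python
import random
import time

def call_with_retries(operation, attempts: int = 3, base_delay: float = 0.5):
    """Run `operation`, retrying on failure with exponential backoff plus jitter."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == attempts:
                raise  # fail visibly once retries are exhausted
            # Back off so a struggling downstream system is not hammered.
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1))

# A deliberately flaky operation - a crude stand-in for Chaos-Monkey-style failure injection
def flaky():
    if random.random() < 0.5:
        raise ConnectionError("simulated failure")
    return "ok"

try:
    print(call_with_retries(flaky))
except ConnectionError:
    print("still failing after retries - surface the failure, don't hide it")
```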

Event-driven:
Applications used to be developed with synchronous actions (as the performance, etc. of the target system was known). But in a decoupled architecture, messages now need to be sent as events. This also simplifies scaling and makes applications more resilient to failure.
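A small sketch of the difference, using Python's standard queue module as a stand-in for a managed message broker (naming SQS, Pub/Sub, etc. here is my assumption, not something from the original):

```python
import json
import queue
import threading

# Stand-in for a managed message broker; in the cloud this would be SQS, Pub/Sub, etc.
event_bus: "queue.Queue[str]" = queue.Queue()

def publish(event_type: str, payload: dict) -> None:
    # The producer fires the event and carries on - it does not wait for the consumer.
    event_bus.put(json.dumps({"type": event_type, "payload": payload}))

def consumer() -> None:
    while True:
        message = event_bus.get()
        event = json.loads(message)
        print(f"handling {event['type']}: {event['payload']}")
        event_bus.task_done()

threading.Thread(target=consumer, daemon=True).start()
publish("booking.created", {"booking_id": "b-123"})
event_bus.join()  # only here so the demo waits for the consumer before exiting
```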

Secure:
Always assume your solution will be subject to some sort of malicious activity and try to prevent it. This means restricting events and users, minimising attack surfaces and following best-practice data handling and security processes.
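As one small illustration of "restricting events and users", a sketch that validates incoming events against an allow-list before any processing happens; the event types and roles are hypothetical.

```python
# Hypothetical allow-lists - the point is to reject anything not explicitly expected,
# which keeps the attack surface small.
ALLOWED_EVENT_TYPES = {"booking.created", "booking.cancelled"}
ALLOWED_ROLES = {"customer", "operator"}

def handle_event(event: dict) -> None:
    if event.get("type") not in ALLOWED_EVENT_TYPES:
        raise ValueError("unexpected event type rejected")
    if event.get("caller_role") not in ALLOWED_ROLES:
        raise PermissionError("caller is not permitted to raise this event")
    # ...only now touch the payload, and never log sensitive fields verbatim.

handle_event({"type": "booking.created", "caller_role": "customer"})
```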

Parallelizable:
Many small systems are usually cheaper than one large one, even in the cloud. Therefore find ways to scale-out your solution and its processing & messaging, rather than scaling-up.
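A scale-out sketch: splitting a batch across several small workers rather than one big process, using the standard library's process pool (the worker count and workload are illustrative).

```python
from concurrent.futures import ProcessPoolExecutor

def process_record(record: int) -> int:
    # Stand-in for per-record work (parsing, enrichment, validation, ...)
    return record * record

if __name__ == "__main__":
    records = list(range(1000))
    # Scale OUT across several small workers instead of scaling UP one big one.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_record, records, chunksize=50))
    print(sum(results))
```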

Automated:
Every cloud-based component, and your overall solution, should be able to be deployed, started, stopped & reset via scripts. Remember to test this from a command line, from the beginning of development through to your Operational Acceptance Testing (and even as a means of testing Disaster Recovery processes).
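A sketch of the "everything from a command line" idea: one entry point that can deploy, start, stop and reset a component, so the same script serves development, OAT and DR rehearsals. The actions here are placeholders, not real deployment commands.

```python
import argparse

# Placeholder actions - in reality each would call your infrastructure/platform tooling.
def deploy(env: str) -> None:
    print(f"deploying to {env}")

def start(env: str) -> None:
    print(f"starting services in {env}")

def stop(env: str) -> None:
    print(f"stopping services in {env}")

def reset(env: str) -> None:
    print(f"resetting {env} to a clean state")

ACTIONS = {"deploy": deploy, "start": start, "stop": stop, "reset": reset}

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Lifecycle script for one component")
    parser.add_argument("action", choices=ACTIONS)
    parser.add_argument("--env", default="test")
    args = parser.parse_args()
    ACTIONS[args.action](args.env)
```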

Resource-consumption-aware:
So, you now have almost limitless processing and storage resources at your fingertips. But you also either have your credit card charged as you use the service or are going to get an invoice for what you use very soon... Therefore, always consider using the least amount of cloud resources possible. Simplify your solution, build & burn components & environments, automate components to start & stop only when they are needed, and don't store more than you need (e.g. by sharing test data across solutions & environments).
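As a sketch of "don't run more than you need", a small routine that flags idle environments for shutdown; the idle threshold and environment records are made up for illustration, and in practice the usage data would come from the instrumentation described earlier.

```python
from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(hours=2)  # illustrative threshold, not a recommendation

# In practice this data would come from your instrumentation/monitoring.
environments = [
    {"name": "dev-1", "last_used": datetime.now() - timedelta(minutes=30)},
    {"name": "test-2", "last_used": datetime.now() - timedelta(hours=5)},
]

def environments_to_stop(envs):
    """Return the names of environments idle for longer than the limit."""
    now = datetime.now()
    return [e["name"] for e in envs if now - e["last_used"] > IDLE_LIMIT]

# These names would then be passed to the stop/reset automation above.
print(environments_to_stop(environments))  # ['test-2']
```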