Scotsoft 2018. Smart people, community and trees

Last week I was proud to continue Sopra Steria’s support of the Young Software Engineer of the Year award, a tradition since its inception 20 years ago. Once again the entrants were outstanding (though I confess the technicalities of some projects went right over my head!). Can Gafuroglu’s winning project was entitled ‘Joint Prediction and Classification of Brain Image Evolution Trajectories from Baseline with Application to Early Dementia Diagnosis’. Our industry is about solving problems, and this project underlines what can be achieved by the smart use of technology by #smartpeople.


The buzz at the dinner was incredible and underlined the spirit of ScotlandIS – that of #community. Our Sopra Steria table was no exception, with a mix of SMEs, customers and advisors, plus Alison McLaughlin, now on secondment to Scottish Government Digital as part of the Digital Fellowship Programme.

And, #trees. Lizzy Yarnold was an inspirational speaker on the evening and reminded us all of the importance of belief, ambition and teamwork. She spoke about the book “The Hidden Life of Trees: What They Feel, How They Communicate”. A brilliant parallel to business life – the need for constant communication, mutual support and networking.

Well done to ScotlandIS. The Scotsoft conference has once again showcased our smart young people and our community, and reminded us that we are a well-connected forest.


by Mags Moore, Head of Government for Scotland and Northern Ireland.

Journey of BB8 (Part 1)

We have all dreamt of flying, fighting with a lightsabre, or controlling objects with our mind. I was lucky enough to make one of my dreams come true when DigiLab UK went on an exploration of brain-computer interfaces (BCI). I recruited one fellow dreamer, a UX designer, to join me, the software engineer, and we started to look at different aspects of BCI. The initial task we chose was to control an object with our minds and, along the way, learn more about the technology. I was staring at my desk, wondering which object to control, and there was my answer staring back at me: the BB8 on my desk. Whether by fate or the Force, we knew what we had to do. We would control BB8 using a BCI device, the Emotiv EPOC+, which was available in-house and had previously been used for a hackathon project in Norway. In this two-part blog series I will take you through my journey of making this prototype, in the hope of helping others who are starting to explore BCI technology.

Setup

The Emotiv EPOC+ headset comes with 14 electrodes. Setup of the device is easy but tedious, as you are required to soak the electrodes with saline solution each time before screwing them onto the device. This process is needed to get good connectivity between the user’s scalp and the electrodes. For people with more hair it is naturally more difficult to get good connectivity, as they must adjust their hair to make sure there is nothing between the electrodes and the scalp. For some people, connectivity levels were sufficient with dry electrodes, but I recommend always soaking the electrodes before using the device, as you are more likely to get good connectivity quickly. There are many videos available online that guide you through the initial setup of the device.

Electrodes need to be screwed onto the device


Emotiv EPOC+ with fourteen electrodes and the EEG head device

Training mental commands

I aimed to control BB8 with the EPOC+ headset, so I started to investigate mental commands and their various functionalities. To use mental commands, you first need to train them. The training process enables the EPOC+ to analyze the individual’s brainwaves and develop a personalized signature corresponding to each mental action.

Emotiv Xavier

The Emotiv Xavier control panel is an application that configures and demonstrates the Emotiv detection suites. It provides the user with an interface to train mental commands, view facial expressions, performance metrics and raw data, and to upload data to their Emotiv account. The user has the option to sign in to their account or use the application as a guest.

The user is required to create a training profile, and can have multiple training profiles under one Emotiv account. Each user needs their own profile, as each of us possesses unique brain waves.

Let’s train the commands

The first mental command, or action, the user must record is their “neutral” state. The neutral state is like a baseline, or passive, mental command. While recording this state, it is advisable to remain relaxed, as when you are reading or watching TV. If the neutral state has not been recorded correctly, the user will not be able to get any other mental commands working properly. For some users, re-recording the neutral state results in better detection of the other mental commands.

The “record neutral” button allows the user to record up to 30 seconds of neutral training data. The recording finishes automatically after 30 seconds, but the user can stop it at any time once they feel enough data has been collected. At least 6 seconds of recorded data is required to update the signature.

After recording the neutral state, the user can start to train any of the 13 different actions available. For my research, I focused on just two mental actions, “push” and “pull.” The Emotiv website provides tips and instructions on how to train mental commands, and suggests remaining consistent in your thoughts while training. To perform any mental action, users must replicate the exact thought process or mental state that they had during the training process. For example, if a user wants to train the “push” command, it is up to them what they think or visualize for that action. Some users might imagine a cube moving away from them, others a cube shrinking; whatever works for them, but they need to remain consistent in their thoughts and mental state. If the user is distracted even for a second, it is advisable to retrain the action. As the user learns to produce a distinct and reproducible mental state for each action, detection of these actions becomes more precise. Most users must train an action several times before getting accurate results.
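Looking ahead to the software side, the consumption pattern is easy to sketch. The snippet below is a hypothetical Python sketch, not the real Emotiv SDK or a real BB8 API: it assumes the headset delivers a stream of (command, power) detections, and only acts when the detection power clears a confidence threshold, so a weak or noisy detection doesn’t make BB8 twitch.

```python
# Hypothetical sketch only: the real Emotiv SDK and BB8 control API differ.
POWER_THRESHOLD = 0.6  # assumed confidence cut-off; would need tuning per user

def dispatch(events, threshold=POWER_THRESHOLD):
    """Map confident 'push'/'pull' detections to (hypothetical) BB8 actions."""
    actions = []
    for command, power in events:
        if power < threshold:
            continue  # weak/noisy detection: ignore it
        if command == "push":
            actions.append("bb8: roll forward")
        elif command == "pull":
            actions.append("bb8: roll back")
    return actions

# Simulated detections: (command, power) pairs as an SDK might report them.
stream = [("push", 0.82), ("neutral", 0.90), ("pull", 0.30), ("pull", 0.71)]
print(dispatch(stream))  # → ['bb8: roll forward', 'bb8: roll back']
```

Note the low-power “pull” is filtered out, and the “neutral” state maps to no action at all.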

While I was trying to train the “push” action, I placed the BB8 on a white table and imagined it moving away from me. I replicated the same thought, imagining BB8 going away from me across the table, and was able to perform the mental action. However, when I placed the BB8 on the carpet, I failed. This may have been because the different colour of the carpet distracted me and I was unable to replicate my exact mental state, and therefore failed to perform the mental action. For me, the environment needed to be the same to reproduce my specific mental state; however, this varies from user to user.

Emotiv Xavier gives the option to view an animated 3D cube on screen while training an action. Some users find it easier to maintain the necessary focus and consistency if the cube is animated to perform the intended action as a visualization aid during the training process; a user can, in effect, watch themselves performing an action by watching the cube. The cube remains stationary unless the user is performing one of the mental actions (if already trained) or has selected the “Animate model according to training action” checkbox for training purposes. It is advisable to train one action fully before moving on to the next one, as it gets harder and harder to train as you add more mental actions.

Is the training process easy?

There are lots of tips and guidance on the Emotiv website for training mental commands, and users are given an interface to help them train and perform mental actions with the aid of animated 3D or 2D models. However, during my three days of training, I was not able to find an easy, generic way to train the mental commands. People are different. Some are more focused than others. Some like to close their eyes to visualize and perform the command. Some want the help of the animation. What I observed is that it depends on the person: how focused they are, and how readily they can replicate a state of mind. There is no straightforward equation; you need time and patience. I was only able to achieve a 15% skill rating after training two mental actions. One of my colleagues reached a 70% skill rating, which he was unable to reproduce later.

NeuroFeedback

While searching for simpler ways to train mental commands, I came across a process known as neurofeedback. Neurofeedback is a procedure for observing your brain activity in order to understand and train your brain. A user observes what their brain is actually doing compared with what they want it to be doing. The user monitors their brain waves, and if they near the desired mental state, they are rewarded with a positive response, which can be music, video or advancing in a game. Neurofeedback is used to help reduce stress and anxiety, to aid sleep, and for other forms of therapeutic assistance.

Neurofeedback is a great way to train your brain for mental commands. For example, someone trying to perform the “push” command can observe their brain activity on screen and see whether they are being consistent, then slowly and steadily train their brain to replicate a specific state. Emotiv provides the “Emotiv 3D Brain Activity Map” and “Emotiv Brain Activity Map”, paid applications that can be used to monitor, visualize and adjust brainwaves in real time. For our research, we didn’t try these applications. If you try them out, let us know how you got on!

Training is like developing a new skill. Remember how you learned to ride a bike, or how you learned to drive? It took time and practice, and it’s the same for training mental commands. Companies do provide help by giving tips, instruction and software applications to help users train and visualize, but in the end, it’s acquiring a new skill, and users need practice. Some might learn faster than others, but for everyone it takes time.


3 tips for accelerating digital transformation in telecoms

The telecoms industry is no stranger to change. After all, the leading players in this sector have delivered network connectivity and devices that sit behind many of the world’s game-changing digital innovations. Take digital pioneer Uber as an example; it simply wouldn’t exist without the proliferation of smartphones and underpinning mobile network engines. But there’s a problem for the telecoms companies providing the networks and devices enabling these new business models. These telco giants need to accelerate their own digital transformations but, unlike digital start-ups, they have made massive investments in legacy IT over the past few decades and can’t simply ‘switch on digital’.

Nonetheless, business leaders recognise that, as consumers increasingly demand a digital customer experience, one that offers instant gratification, they must embrace the digital economy. Failure to become a truly digital company is not an option. You only have to look at the number of big-name companies that have gone out of business in recent years because they couldn’t, or wouldn’t, transform.

So, how can traditional telecoms companies survive in today’s fast-changing digital economy? Not known for their agility, how do they forge ahead quickly with digital transformation programmes that ensure their business models and operations are fit for the future? There are many recommendations for accelerating transformation, embracing technical, operational and process change, but I’m going to focus on just three in this blog.

Tip 1 – Modernise legacy applications, rather than dispose of them

At Sopra Steria, we encourage those clients with a heavy investment in legacy assets to modernise what they’ve invested in over the years, while ensuring they also keep pace with modern, cloud-based developments. It’s clearly not feasible to replace decades-old systems and applications in their entirety. That’s especially so in an industry experiencing significant pressure on revenues and margins (e.g. decreased roaming revenues, commoditisation and price erosion) and needing to continue investing in their networks (SDN/NFV & 5G, etc). So, my recommendation is to adopt an evolutionary approach. Ask what you need to do to extract more value from existing IT assets in line with a digital strategy. Then look at the real business triggers for legacy applications to become redundant or the option to replace with new, cloud-native ones. Be selective in your investments and opt for projects that give a rapid ROI. Modernisation offers a quick win as you accelerate your digital transformation.

Tip 2 – Use Agile coaches to turn DevOps from theory into reality

We all know that speed to market with new services and products that give customers the digital experience they’re looking for is vital. To achieve this, organisations recognise that they need to transform their software development processes. Traditional lengthy waterfall-style development must be replaced with a DevOps culture that enables rapid, frequent releases through Agile sprints. This is typically a strategic top-down decision that sounds good in theory. The message is clear: we need to release fast, often and with assured quality; and we need to be agile so that we can respond quickly as the market changes. Yet that message becomes lost as it filters down through the management layers and those people expected to put theory into practice struggle to make it happen. I’ve seen enterprises overcome this by embedding Agile coaches at different layers of the organisation. These are people with practical experience of DevOps and Agile, able to lead and demonstrate this new way of working. This is a case of ‘don’t just tell us how to do it, show us as well’.

Tip 3 – Address adoption challenges with a defined vision and value position

Even with Agile coaches embedded in the end-to-end DevOps cycle, we still see instances where an organisation has implemented a new system or launched an innovative app that fails to gain traction with users. Let’s say, for example, you want to launch a mobile front-end on your Oracle DB system, enabling your employees to access what they need, where and when they need it. Or you might have invested millions in a new cloud platform for better visibility and control. If you want to avoid this being money down the drain, you must encourage user adoption. This requires communication of the ‘vision for’ and ‘value of’ your investment. So, it’s not just a case of communicating what the new capability is for (the vision), but clearly articulating the benefit it will bring both to the business and the users themselves (the value). If it’s a sales application, why would your salespeople use it if they perceive that it’s just a management tool for tracking what they do? How much more enthusiastic would they be if they understood how it will help them to sell more, faster? It sounds a simple tip for ensuring successful adoption of new digital tools, but the lack of a defined vision and value proposition can so easily stand in the way of you achieving your desired business outcomes.

Get in touch

The above three tips are just a flavour of the new thinking and approaches that telcos must take on board to accelerate their digital transformations and survive in today’s digital economy.

To find out more please contact me – jason.butcher@soprasteria.com 

Bridging the gap: how Fintechs and ‘big business’ can work together

by Colin Carmichael, UK Fintech Director

Everyone’s talking about Fintechs – but what does ‘Fintech’ really mean? It’s a generic term that loosely groups a number of innovative technology organisations within financial services.

As the Fintech director for Sopra Steria, I believe I know all about Fintech. To me, Fintech is all about change – introducing new, fresh ideas and ways of working – and making them happen. I’ve worked in financial services across the UK, Europe and further afield for many years, and organisations of all sizes find it hard to change; the bigger the organisation, the greater the challenge. Change means that organisations have to think and act differently to introduce brand new ways of working and deliver desirable services to their customers. The customer really is king, and new products and services need to be built to their wishes (rather than the ‘old-fashioned’ way of creating a product and selling it hard). What’s more, new, faster technology and access to huge amounts of data have made this issue more acute, as they have raised customer expectations. Put simply, there’s so much to think about and to do to get ahead and stay ahead.

Organisations need to keep up with the very latest ideas – and still deliver a reliable and robust service. And it’s a fact that incorporating new technology is how they will do it. So why is it so challenging for Fintechs and big players to work together? All too often, Fintechs struggle to get their ideas to the right decision makers – and established businesses are nervous of too much change.

The biggest hurdles are often company politics, internal structures, old processes and, of course, the difficulty of incorporating brand new ideas into ‘old’ systems. For Fintechs, it’s tricky to get the right contacts at the right level – and to ensure their ideas are brought to life safely and securely. For banks and insurers, introducing new, untried and untested ideas is hugely risky, and it can take a long time – as well as effort and money – to get right.

What’s needed is a bridge between the Fintechs and the more traditional organisations – to help them to work productively together. Organisations like Sopra Steria have platforms that are at the heart of many of today’s large businesses – and they also understand existing processes, procurement and politics which often stand in the way of getting things done. By working together, Fintechs, established players and platform organisations can listen to and learn from each other, in order to fast track innovation and get the results they need – quickly and cost effectively.

So, my advice to banks and insurance companies as well as the Fintechs is to work and collaborate with a platform provider from the start. Fintechs can safely test and prove their worth in ‘virtual factories’ using real systems and data – and financial organisations can be confident about bringing the best and brightest ideas to market without huge risk. It puts new Fintechs in touch with established players – and accelerates change. And that’s what we all want.

So, maybe, we shouldn’t be using the term ‘fintech’ to refer to just new and upcoming technology companies. After all – aren’t we all Fintechs? Perhaps instead we should be focusing on partnerships and collaborations between new technology companies, established organisations and the role platform players have to accelerate change.

It really is true. It’s not what you know but who you know that makes all the difference.

Quantum Computers: A Beginner’s Guide

What they are, what they do, and what they mean for you

What if you could make a computer powerful enough to process all the information in the universe?

This might seem like something torn straight from fiction, and up until recently, it was. However with the arrival of quantum computing, we are about to make it reality. Recent breakthroughs by Intel and Google have catapulted the technology into the news. We now have lab prototypes, Silicon Valley start-ups and a multi-billion dollar research industry. Hype is on the rise, and we are seemingly on the cusp of a quantum revolution so powerful that it will completely transform our world.

On the back of this sensationalism trails confusion. What exactly are these machines and how do they work? And, most importantly, how will they change the world in which we live?


At the most basic level, the difference between a standard computer and a quantum computer boils down to one thing: information storage. Information on standard computers is represented as bits – values of either 0 or 1 – and these provide operational instructions for the computer.

This differs on quantum computers, as they store information on a physical level so microscopic that the normal laws of nature no longer apply. At this minuscule level, the laws of quantum mechanics take over and particles begin to behave in bizarre and unpredictable ways. As a result, these devices have an entirely different system of storing information: qubits, or rather, quantum bits.

Unlike the standard computer’s bit, which can have the value of either 0 or 1, a qubit can have the value of 0, 1, or both 0 and 1 at the same time. It can do this because of one of the fundamental (and most baffling) principles of quantum mechanics: quantum superposition, the idea that one particle can exist in multiple states at the same time. Put another way: imagine flipping a coin. In the world as we know it (and therefore the world of standard computing), you can only have one of two results: heads or tails. In the quantum world, the result can be heads and tails.

What does all of this mean in practice? In short, the answer is speed. Because qubits can exist in multiple states at the same time, they are capable of running multiple calculations simultaneously. For example, a 1-qubit computer can conduct 2 calculations at the same time, a 2-qubit computer 4, and a 3-qubit computer 8 – increasing exponentially (2^n for n qubits). Operating under these rules, quantum computers bypass the “one-at-a-time” sequence of calculation that a classical computer is bound by. In the process, they become the ultimate multi-taskers.
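To make the coin-flip picture concrete, here is a toy Python sketch (real quantum programming uses dedicated SDKs) of a single qubit put into equal superposition by a Hadamard gate, followed by a loop showing how the state space a simulator must track grows exponentially with the number of qubits:

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state [a, b]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

# |0> put into equal superposition: "heads AND tails" at once.
state = hadamard([1.0, 0.0])
probs = [abs(amp) ** 2 for amp in state]
print(probs)  # ≈ [0.5, 0.5]: a measurement gives 0 or 1 with equal probability

# The state space grows exponentially with the number of qubits:
for n in (1, 2, 3, 10):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```

A classical simulator has to store all 2^n amplitudes explicitly, which is exactly why simulating even modest quantum systems overwhelms classical machines.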

To give you a taste of what that kind of speed might look like in real terms, we can look back to 2015, when Google and NASA partnered to test an early prototype of a quantum computer called the D-Wave 2X. Taking on a complex optimisation problem, the D-Wave was able to work roughly 100 million times faster than a single-core classical computer and produced a solution in seconds. Given the same problem, a standard laptop would have taken 10,000 years.


Given their potential for speed, it is easy to imagine a staggering range of possibilities and use cases for these machines. The current reality is slightly less glamorous. It is inaccurate to think of quantum computers as simply being “better” versions of classical computers. They won’t simply speed up any task run through them (although they may do that in some instances). They are, in fact, only suited to solving highly specific problems in certain contexts – but there’s still a lot to be excited about.

One possibility that has attracted a lot of fanfare lies in the field of medicine. Last year, IBM made headlines when they used their quantum computer to successfully simulate the molecular structure of beryllium hydride, the most complex molecule ever simulated on a quantum machine. This is a field of research which classical computers usually have extreme difficulty with, and even supercomputers struggle to cope with the vast range of atomic (and sometimes quantum) complexities presented by complex molecular structures. Quantum computers, on the other hand, are able to read and predict the behaviour of such molecules with ease, even at a minuscule level. This ability is significant not just in an academic context; it is precisely this process of simulating molecules that is currently used to produce new drugs and treatments for disease. Harnessing the power of quantum computing for this kind of research could lead to a revolution in the development of new medicines.

But while quantum computers might set in motion a new wave of scientific innovation, they may also give rise to significant challenges. One potentially hazardous use case is the quantum computer’s ability to factorise extremely large numbers. While this might seem relatively harmless at first sight, it is already stirring up anxieties in banks and governments around the world. Modern-day cryptography, which ensures the security of the majority of data worldwide, relies on complex mathematical problems – tied to factorisation – that classical computers have insufficient power to solve. Such problems, however, are no match for quantum computers, and the arrival of these machines could render modern methods of cryptography meaningless, leaving everything from our passwords and bank details to state secrets extremely vulnerable – able to be hacked, stolen or misused in the blink of an eye.
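For a sense of why factorisation is the weak point, here is a deliberately naive Python illustration of the classical approach, trial division. The numbers are toy-sized and purely illustrative: real cryptographic moduli are hundreds of digits long, far beyond any brute-force search, which is precisely what quantum algorithms such as Shor’s threaten to change.

```python
def smallest_factor(n):
    """Brute-force trial division: return the smallest prime factor of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

semiprime = 2021  # = 43 * 47, a toy stand-in for an RSA-style modulus
p = smallest_factor(semiprime)
print(p, semiprime // p)  # → 43 47
```

The work grows with the size of the smaller factor; double the number of digits in the modulus and classical approaches become astronomically slower, while Shor’s algorithm on a large enough quantum computer would not.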


Despite the rapid progress made over the last few years, an extensive list of obstacles remains, with hardware right at the top. Quantum computers are extremely delicate machines, and a highly specialised environment is required to produce the quantum state that gives qubits their special properties. For example, they must be cooled to near absolute zero (roughly the temperature of outer space) and are extremely sensitive to any kind of electrical or thermal interference. As a result, today’s machines are highly unstable, often maintaining their quantum states for just a few milliseconds before collapsing back into normality – hardly practical for regular use.

Alongside these hardware challenges marches an additional problem: a software deficit. Like a classical computer, quantum computers need software to function; however, this software has proved extremely challenging to create. We currently have very few effective algorithms for quantum computers, and without the right algorithms they are essentially useless – like having a Mac without a power button or keyboard. Some strides are being made in this area (QuSoft, for example), but we would need to see vast advances in this field before widespread adoption becomes plausible. In other words, don’t expect to start “quoogling” any time soon.

So despite all the hype that has recently surrounded quantum computers, the reality is that now (and for the foreseeable future) they are nothing more than expensive corporate toys: glossy, futuristic and fascinating, but with limited practical applications and a hefty price tag attached. Is the quantum revolution just around the corner? Probably not. Does that mean you should forget about them? Absolutely not.

Containers: Power & Scale

by Richard Hands, Technical Architect

In my last blog post, we looked at the background of Containers. In this piece, we will explore what they can do and their power to deliver modern microservices.

What can they do?

Think of containers on a ship. This is the most commonly used visual analogy for software containers: a large number of containers, all potentially holding different things, but all sitting nice and stable on a single infrastructure platform. It gives a great mental picture to springboard from.

Containers are to Virtual Machines what Virtual Machines were to straight physical hardware. They are a new layer of abstraction, which allows us to get more ‘bang for our buck’. In the beginning, we had dedicated hardware, which performed its job well, but in order to scale your solution you had to buy more hardware – difficult and expensive. Along came Virtual Machines, which allowed us to utilise much more commoditised hardware and scale up within it by adding more instances of a VM, but again, this still came with quite a cost.

To spin up a new VM, you have to ensure that you have enough remaining capacity on the VM servers, and if you are using subscription or licensed operating systems, you have to factor that in too. Now along come containers. Containers hold only the pieces of code, and the libraries, necessary to run their particular application. They rely on the underlying infrastructure of the machine they are running on (be it physical or virtual). We can typically run 10-20x more containers per host than if we were to put the same application directly on VMs and scale up by increasing the number of VMs.
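As a concrete sketch of “only the pieces necessary”, a container image definition typically lists just a minimal base, the application artefact, and how to start it. The base image, paths and names below are illustrative only:

```dockerfile
# Illustrative only: an image holding just a minimal runtime plus one service.
FROM openjdk:8-jre-alpine
# Copy in the single application artefact (hypothetical path and name).
COPY target/my-service.jar /app/my-service.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/my-service.jar"]
```

There is no operating system install, no licensing, no general-purpose tooling: everything else comes from the host it runs on.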

Orchestration for power

Containers help us solve the problems of today in far more bite-sized chunks than ever before. They lend themselves perfectly to microservices. Being able to write a microservice, and then build a container that holds just that microservice and its supporting framework – be it Spring Boot, WildFly Swarm, Vert.x, etc. – gives us an immense amount of flexibility for development. The problem comes when you want to orchestrate all of the microservices into a cohesive application, and add in scalability, service reliability, and all of the other pieces that a business requires to run successfully. Trying to do all of this by hand would be an enormous challenge.

There is a solution however, and it comes in the form of Kubernetes.

“Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers.” (http://kubernetes.io)

Kubernetes gives us a container run environment that allows us to define our application’s run requirements declaratively, rather than imperatively. Again, let’s look back to our older physical or VM models for the imperative definition:

“I need to run my application on that server.”

“I need a new server to run my application on, and it must have x memory and y disk”

This approach always requires justifications, and far more thought around HA considerations such as failover, as we are specifying what we want our application to run on.

Most modern applications, being stateless by design, and certainly containers, don’t generally require that level of detail of the hardware that they are running on. They simply don’t care as they’re designed to be small discrete components which work together with others.  The declarations look more like:

“I want 10 copies of this container running to ensure that I’ve got sufficient load coverage, and I don’t want more than 2 down at any one time.”

“I want 10 copies of this container running, but I want the capability to increase that if CPU or memory usage exceeds x% for y% of the time, and then return to 10 once load has fallen back below z%.”
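In Kubernetes, declarations like these map almost directly onto resource definitions. Below is a minimal sketch: the names, image and thresholds are illustrative, and API versions may differ between Kubernetes releases.

```yaml
# "Run 10 copies, with no more than 2 down during a rolling update."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice            # illustrative name
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
      - name: my-microservice
        image: registry.example.com/my-microservice:1.0   # illustrative image
---
# "Scale beyond 10 while CPU stays hot, and fall back when load drops."
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-microservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-microservice
  minReplicas: 10
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

Notice there is no mention of which servers anything runs on: the scheduler decides that, which is exactly the declarative point.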

These declarations are far more about the level of application service that we want to provide, than about hardware, which in a modern commoditised market, is how things should be.

Kubernetes is the engine which provides this facility, but also so much more. For example, with Kubernetes we can declare that we want x and y helper processes co-located with our application, so that we are building composition whilst preserving one application per container.

Auto-scaling, load balancing, health checks, replication, storage systems, updates – all of these things can be managed for our container run environment by Kubernetes. Overall, it is a product that requires far more in-depth reading than I can provide in a simple blog post, so I shall let you go and read at http://kubernetes.io

Last thoughts

To conclude, it is evident that containers have already changed the shape of the IT world, and will continue to do so at an exponential pace. With public, hybrid and private cloud computing becoming ‘the norm’ for organisations and even governments, containers will be the shift that helps us break down the barriers from traditional application development into a true microservices world. Container run systems will help us break down the old-school walls of hardware requirements, freeing development to provide true business benefit.

Follow Richard Hands on Twitter to keep up to date with his latest thoughts.

How the Equality Act 2010 affects you

Most of us use online services such as banking, travel and social media every day with little thought as to how we access or use them. However, this isn’t the case for many users, including employees.

The Disability Discrimination Act 1995, which previously provided protection against direct discrimination, has been superseded by the Equality Act 2010 (except in Northern Ireland). The Equality Act came into force on 6 April 2011 and changes the law to bring disability, sex, race and other types of discrimination under one piece of legislation.

One major change is that the Equality Act 2010 now covers perceived disability and indirect discrimination, making it easier for claimants to bring successful legal proceedings against businesses and public bodies.

What it means

The Equality Act essentially means that all public bodies and businesses providing goods, facilities or services to members of the public, including to employees (for example retail, HR and councils), must make fair and reasonable adjustments to ensure services are accessible and do not indirectly discriminate. Being fair and reasonable means taking positive steps to ensure that disabled people can access online services. This goes beyond simply avoiding discrimination: it requires service providers to anticipate the needs of disabled customers.

Benefits of compliance

UK retailers are missing out on an estimated £11.75 billion a year in potential online sales because their websites fail to consider the needs of people with disabilities (Click-Away Pound Survey 2016).

In addition, 71% (4.3 million) of disabled online users will simply abandon websites they find difficult to use. Though representing a collective purchasing power of around 10% of the total UK online spend, most businesses are completely unaware they’re losing income, as only 7% of disabled customers experiencing problems contact the business.

How to comply with the Equality Act

The best way to satisfy the legal requirement is to have your website tested by disabled users. This should ideally be undertaken by a group of users with different disabilities, such as motor and cognitive disabilities, and forms of visual impairment. Evidence of successful tests by disabled users could be invaluable in the event of any legal challenge over your website’s accessibility.

The World Wide Web Consortium (W3C) is the international organisation that provides standards for the web, and it publishes the Web Content Accessibility Guidelines 2.0 (WCAG 2.0), a good indicator of the standard the courts would reasonably expect service providers to follow to ensure that their websites are accessible.

WCAG 2.0 defines three ‘conformance levels’, known as Levels A, AA and AAA, each comprising a series of testable success criteria for accessibility. Public bodies such as the government adhere to Level AA accessibility as standard.

According to these guidelines, websites must at minimum satisfy Level A; this is a basic requirement and relatively easy to implement. Satisfying Level AA will remove significant barriers for customers. Finally, Level AAA is the highest level of accessibility: it requires specific measures to be implemented and will ensure most disabled customers can access services.
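Automated checks can complement user testing for the simpler success criteria. As a small illustration, the Python sketch below uses only the standard library to flag `img` tags that lack an `alt` attribute, one of the most basic Level A requirements. Automated checks like this catch only a fraction of accessibility issues, so testing with disabled users remains essential.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect the src of every <img> tag that has no alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<no src>"))

html = """
<img src="logo.png" alt="Company logo">
<img src="chart.png">
"""
checker = MissingAltChecker()
checker.feed(html)
print(checker.missing)  # → ['chart.png']
```

An empty `missing` list is only a starting point: a screen reader user still needs the alt text to be meaningful, which no parser can verify.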

Read the Equality Act 2010 quick start guides to find out more about how this affects you.