TC Regulate expounds the laws, regulations, policies, best practices and compliance issues shaping tech and innovation in Africa. Our expert contributor network will help you understand the policies, their impact, and how regulators and other stakeholders can capture real value in the tech age.

The introduction of the first cars at the end of the 19th century prompted laws which may seem slightly out of place to modern observers.

So-called ‘red flag laws’ introduced in England required someone to walk 60 yards in front of a moving vehicle waving a red flag or a lantern to warn bystanders of the approaching car (which was limited to travelling at 4 mph).

In Pennsylvania, USA, an even more bizarre law, passed by both houses of the state legislature, mandated that upon a chance encounter with livestock a driver must disassemble their vehicle as quickly as possible and hide all the pieces in the bushes to avoid scaring the animals. The bill was subsequently vetoed by the state governor.

Unsurprisingly, vehicle regulations have moved on since the 1800s. However, these examples raise some interesting questions about the purpose of regulating technology, whose interests regulation is supposed to protect, and what courses of action regulators can choose to take.

The advent of new technology in any sector often leads to a fundamental change in the way a product or service is delivered. Innovation, by its very nature, is designed to reduce costs or increase efficiency. The examples of early motorised vehicle regulation above illustrate the gradual adjustment from regulating horse-based transport, with all the work and cost involved, to regulating motorised transport.

Fast forward a little over a century and we’re more likely to be talking about completely removing the driver from the car, allowing for further cost reductions and greater efficiencies elsewhere.

For example, a driverless taxi service may reduce accidents and traffic jams, but it raises new concerns about where, and to whom, blame should be assigned for any malfunctions that endanger passengers, pedestrians and other vehicles and their occupants.

Balancing risk and access

A regulator’s role in looking at new technologies revolves around considering how the technology interacts with the status quo: identifying where the norm is disrupted and what potential for harm this creates. In our driverless car example, this would involve thinking about what changes introducing autonomous vehicles onto the roads would bring, how these new vehicles would interact with the existing rules of the road, how likely it is that they would interact with existing road users in the expected way, and the likelihood of disruption.

This leads to further consideration of what the damage would be if something went wrong, but also of how easily protections against harm could be put in place. The answer varies from sector to sector.

Sound cybersecurity measures could provide adequate protections for a payments system such as M-Pesa with small transactions going back and forth. However, Elon Musk unleashing fleets of self-driving cars onto the roads would require considering a far greater range of issues to ensure adequate protections and minimise mayhem on the streets.

Having considered why technology creates the need to regulate, and what the goal of regulating technology is, the next questions are who regulation is designed to protect, and how it should go about doing so.

The obvious answer as to whom regulation is supposed to protect is, of course, first and foremost the consumer, or the public. Regulatory bodies should be established to create institutions focused on consumer and public interests, protecting individuals from businesses acting in a purely unfettered, unsupervised and unaccountable way. This gives the public trust in the market and a reasonable expectation of the product or service they should be able to access safely.

When thinking about this goal of protecting the public, therefore, the issue is largely one of the risk of harm. There is a far greater risk of physical harm in regulating medical practice than there is, for example, in the fintech industry.

Moreover, there is a potentially greater risk of harm from a medical app such as Babyl, which is able to issue prescriptions, or Health Connect 24×7, which provides telemedicine, than there is, for example, with Medbit, which allows users to book appointments with traditional doctors.

Therefore, it is necessary to develop a regulatory framework which recognises these different risks, creating greater barriers to entry and stricter discipline for wrongdoing in cases where the risk is more serious.

So, introducing tech into the regulated sphere should be simple, right? All you need to do is think about protecting the consumer and weigh up the greatest risks.

Well, there are more complications than may first be apparent. First, when technology is involved, who do you regulate?

In a traditional sense this is fairly obvious: you regulate the provider of the product. If the product is legal advice, you regulate the lawyer; if the product is a car, you regulate the manufacturer.

However, for a digital product, where a computer is providing the product, it is less clear where responsibility lies. Should the coder be responsible? Should the CEO of the company providing the product or service be responsible? Should the individual who fed in the data that enabled the computer to make the decision be responsible? In the case of AI, can anyone be held responsible at all?

The advent of technology raises difficult technical questions about how a piece of software can be regulated, and further questions about the added risks the technology itself introduces. If a piece of tech relies exclusively on user-input data, which is not checked by a professional, there is a risk that information may be entered incorrectly or missed altogether.

For example, if someone uses a piece of software to write a will, there is a risk of creating future uncertainty, because reliance on the technology means the information is never checked by a qualified individual.

Another example would be tech that relies on the gig economy. This raises questions over whether gig workers are employees of the company or independent contractors, what obligations and regulations apply to them on either reading, and how that sits alongside the regulation of the platform itself. In the case of Uber or the myriad other transport platforms, who should be regulated, and how? The company? The drivers? Or both? By whom? And how should overlapping regulators and potentially conflicting regulations be dealt with?

This is not to say that regulators should shy away from allowing the use of technology, out of fear of any potential harm.

Regulators must also consider the reverse: by simply shutting the gate and refusing to regulate any technology at all, there is the potential to harm the consumer in other ways.

Embracing and enhancing the benefits

The first is encouraging the use of yet-to-be-regulated technology. A regulator attempting to stop technological development conjures up an image of Canute ordering back the sea.

Technological disruption of some sort has arrived in every industry around the world, and change is only going to get faster. By ignoring this, regulators stand to encourage firms to operate in legally grey areas, or to set up in a way that circumvents regulatory requirements.

This ultimately creates a potentially dangerous situation with tech companies operating outside any regulatory jurisdiction. However, by creating a system which allows the development of regulatory standards in conjunction with technological development, regulators stand to better protect the consumer, whilst technology firms gain the certainty of operating within a clear regulatory environment.

Equally, by preventing access to technology, particularly in underserved communities, regulators could in fact do more harm by effectively removing access to essential services altogether.

Take the example of a tech platform providing access to legal advice, such as Barefoot Law or JusDraft, or a medical tech service like Zipline. In many cases, consumers will have very limited access to these services from a traditional provider, due to factors like geography, cost and so on.

Providing access to these services, even in a slightly limited form, delivers a far greater benefit to consumers than no access at all. Regulators must weigh the benefits that the development and implementation of tech creates for consumers against the potential risks generated by the technology itself.

Clearly, technological innovation adds layers of complexity to the already complex arena of regulation, creating the temptation to turn a blind eye in the hope of avoiding doing the wrong thing. However, taking this approach risks hindering development and excluding individuals from valuable products and services, and at worst actively encouraging the development of unsafe, unregulated products.

It is therefore important for regulators to engage actively with technology, taking the time to identify the clear and genuine risks and working with innovators to create a regulatory framework in which consumers are protected but innovation is not stifled – and hopefully no one has to hide their creation in a bush.

Editor’s Note: This series is brought to you in partnership with the African Law & Tech (ALT) Network, an online community at the intersection of technology and law in Africa. Peter Morton, Research and Projects Associate at the ALT Network, authored this first piece of the series.
