In the green and lush medieval city of Edinburgh lies the powerhouse of AI advancements: smartR AI. Here the engineers Matthew Malek, Jonah Ramponi and Kornelija Sukyte, under the guidance of CEO and engineer Oliver King-Smith, are the driving force behind the company’s inspirations, aspirations and, of course, dedication to finding the optimal AI solution for every client. Each engineer at smartR AI has taken a unique journey to the role they find themselves in now. Be it their educational journey:
“The diverse blend of smartR AI colleagues specializing in physics, cognitive science, mathematics, and computer science gives us an assortment of expertise, alongside those team members who’ve worked on many different projects. All of this gives us a wealth of prior experience to draw from.”
– Jonah Ramponi
Or the moment they were inspired to pursue a career in AI:
“My initial exposure to AI occurred through DeepMind’s AlphaGo…I aspire to make a significant impact on the world by challenging the limits of what is deemed achievable, akin to the accomplishments of that exceptional engineering team.”
– Matthew Malek
No two journeys are alike.
As I tried to understand what the engineers aim to achieve with each project, I asked a question those working in AI are probably tired of answering: were they trying to emulate human intelligence? I was told that the process of coding and engineering products can certainly be inspired by both human intelligence and animal behavior. Nowhere is this better exemplified than in the academic backgrounds of the engineers at smartR AI, with Kornelija’s degree in cognitive science and Jonah’s work on optimization using techniques inspired by ant behavior. Yet when it comes to the objective of smartR AI, Matthew emphasizes that it is to create specialized tools that incrementally improve aspects of our daily lives, not to replace specific dimensions of intelligence.
This goal to aid, not replace, humans reflects a deeper moral conscience within the company and a commitment to a human-centred approach to AI. Having asked each engineer about the ethical issues present within their field, the answers I received showed a profound understanding of the individuals whose lives might be affected by the programs being developed today. Strategizing to mitigate job loss, using quality data and rooting out bias in AI systems were key issues the engineers addressed. The sense of responsibility among the engineers is strong, and unfortunately, for the moment it seems to be a burden they are facing alone:
“Engineers can help mitigate the often vague guidelines, and the fact that the AI field is moving so fast these days that ethical implications might not always be evident at first, by striving for interpretability and explainability of the models.”
– Kornelija Sukyte
And prioritizing interpretability within systems, particularly in the healthcare industry, was also a key consideration:
“Within smartR AI, our commitment revolves around formulating models whose decision-making processes can be comprehended and elucidated. We aim to set a golden standard for ethical AI with every endeavour we undertake, with the hope of leading the global AI community in creating AI that is responsible and mindful.”
– Matthew Malek
Finally, I asked the engineers what they hope AI will achieve for society and, surprisingly, their answers were very consistent: they hope to use AI to improve the quality of life of their fellow human beings.
“There’s a clear opportunity for AI to deliver value—if applied to the right scenarios. At smartR AI we work with organizations to ensure your AI implementation supports stronger human connections rather than replacing them. Maintaining the human element in every AI project is a primary focus for our team. After all, it’s people that make a company.”
– Oliver King-Smith, CEO and Founder
Written by Celene Sandiford, smartR AI