
AI Leadership Insights

What is the future of intelligence?

With AI dominating conversations everywhere, there’s no shortage of opinions, predictions, and noise. What’s missing, more often than not, is a clear sense of direction. I recently listened to Prof. Hannah Fry’s interview with the legendary Demis Hassabis, and it struck me as one of the more thoughtful discussions on where intelligence (artificial and otherwise) may actually be going.
For those who don't know him, Demis Hassabis is a British AI researcher, entrepreneur, and neuroscientist best known as the co‑founder and CEO of Google DeepMind and for leading breakthroughs like AlphaGo and AlphaFold, which have reshaped both AI and biology. His prominence comes from combining deep technical contributions with high‑impact real‑world applications, culminating in a 2024 Nobel Prize in Chemistry for AI‑driven protein structure prediction.

Google DeepMind has long been a major force in AI research. Much of the work it began 10–15 years ago laid the groundwork for later advances, including the transformer-based models that organizations such as OpenAI eventually commercialized. What differentiates Google DeepMind, in my view, is its focus on producing world-class research and science rather than pursuing a purely commercial agenda. Hassabis’ expertise in neuroscience clearly shapes this direction: drawing ideas from how humans think, learn, and plan, and applying them to machine learning. This cognitive path toward AI is very deliberate.

Hassabis often describes DeepMind’s mission as “solving intelligence,” then applying it to hard problems in science and society. He has also been vocal on AI safety and governance, contributing to global policy discussions like the UK AI Safety Summit, while still driving large systems used across research and industry.

 

What is the future of intelligence?

After listening to Hassabis reflect on intelligence, one thing became clear: progress toward AGI is not a straight line, and it’s not just about building bigger models. The reality is messier, and more interesting!

1. A decade of progress in a year exposed real weaknesses

Multimodal systems have advanced at remarkable speed. At the same time, this progress revealed an uncomfortable truth: models that can solve Olympiad-level problems may still fail at basic reasoning. Instead of smoothing intelligence, recent gains made its uneven nature obvious.

2. The main obstacle is consistency, not raw ability

Hassabis describes today’s systems as “jagged intelligences” — impressive at certain peaks, unreliable in the gaps. Until AI can reason steadily across domains and recognize when it doesn’t know something, general intelligence remains out of reach.

3. Bigger models alone are not enough

DeepMind is placing its bets evenly: half on scaling compute and data, half on new system designs. Scale helps, but progress also depends on better reasoning, handling uncertainty, and learning over longer time horizons.

4. Language does not equal understanding

Some parts of intelligence can’t be learned from text alone. Physical intuition, spatial reasoning, and interaction with the world require experience. This is why world models and simulation are becoming central to current research.

5. Simulation may teach us why intelligence exists at all

One of the most striking ideas discussed was the use of large-scale simulations to study how intelligence, social behavior, and even consciousness might arise. Running millions of controlled experiments could help explain not just how intelligence works, but why it emerged.

6. AI may be overstated now, and still underestimated later

Hassabis holds two views at once: parts of today’s AI ecosystem are clearly inflated, yet the deeper, long-term effects (especially in science and energy) are still widely misunderstood. The biggest changes may arrive later, but cut much deeper.

 

Key New Announcements/Concepts:

1. Deepened CFS Partnership

  • Hassabis revealed that the collaboration with Commonwealth Fusion Systems is now much deeper than previously understood.

  • The work goes beyond advisory roles into plasma containment and advanced materials, positioning fusion research as a real testbed for AI-driven scientific discovery.

 

2. Genie ↔ SIMA Infinite Training Loop

  • For the first time, Hassabis publicly described how Genie (world models) and SIMA (embodied agents) are intended to form an infinite self-improving loop.

  • World models generate environments → agents act within them → outcomes refine the world model → repeat.

  • This frames embodied learning as central, not auxiliary, to AGI progress.
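To make the shape of this loop concrete, here is a toy Python sketch of the generate → act → refine cycle. The WorldModel and Agent classes are stand-ins invented for this post, not DeepMind’s actual Genie or SIMA interfaces.

```python
import random

# A toy sketch of the world-model <-> agent loop described above.
# WorldModel and Agent are illustrative stand-ins, not DeepMind APIs.

class WorldModel:
    """Generates environments and refines itself from agent outcomes."""
    def __init__(self):
        self.difficulty = 0.5  # toy internal state

    def generate_environment(self):
        # An "environment" here is just a success threshold.
        return {"threshold": self.difficulty}

    def update(self, outcomes):
        # Nudge difficulty toward the agent's frontier of competence.
        success_rate = sum(outcomes) / len(outcomes)
        self.difficulty += 0.1 * (success_rate - 0.5)

class Agent:
    def act(self, env, episodes=100):
        # A random policy standing in for an embodied agent.
        return [random.random() > env["threshold"] for _ in range(episodes)]

world, agent = WorldModel(), Agent()
for _ in range(10):                     # the "infinite" loop, truncated
    env = world.generate_environment()  # world model generates environment
    outcomes = agent.act(env)           # agent acts within it
    world.update(outcomes)              # outcomes refine the world model
```

The point is structural rather than algorithmic: each pass tightens the coupling between the simulated world and the agent learning inside it.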

 

3. Physics Benchmarking via Game Engines

  • A genuinely novel methodological detail: DeepMind is developing A-level physics benchmarks inside game engines.

  • The goal is to test whether models actually respect Newtonian laws, not just predict outcomes statistically.

  • This signals a shift from language-centric evaluation to grounded physical correctness.
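To illustrate what “respecting Newtonian laws” could look like as a benchmark, here is a hedged sketch that scores a model’s predicted trajectory for a dropped object against analytic kinematics. The scenario and the model_predict_position hook are assumptions made for this example, not details of DeepMind’s benchmark.

```python
# Score a model's trajectory predictions against Newtonian free fall,
# rather than judging the fluency of a textual answer. Illustrative only.

G = 9.81  # gravitational acceleration, m/s^2

def newtonian_drop(t: float, h0: float) -> float:
    """Height of an object dropped from h0 metres after t seconds (no drag)."""
    return max(h0 - 0.5 * G * t * t, 0.0)

def physics_consistency(model_predict_position, h0=20.0, dt=0.1, steps=20):
    """Mean absolute deviation between the model and Newtonian physics."""
    errors = []
    for i in range(steps):
        t = i * dt
        predicted = model_predict_position(t, h0)  # system under test
        expected = newtonian_drop(t, h0)           # ground-truth physics
        errors.append(abs(predicted - expected))
    return sum(errors) / len(errors)

# Example: a "model" that uses the wrong gravitational constant
wrong_g_model = lambda t, h0: max(h0 - 0.5 * 9.0 * t * t, 0.0)
print(f"mean deviation: {physics_consistency(wrong_g_model):.3f} m")
```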

 

4. Whole-Statement Confidence Scoring

  • Hassabis outlined a concrete path to addressing model reliability: confidence is assessed across reasoning and planning steps, validating entire statements rather than token-by-token probabilities.

  • This is an important evolution toward trustable reasoning systems rather than fluent text generators.
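Hassabis did not give implementation details, but one published way to score whole statements rather than tokens is self-consistency-style aggregation: sample several independent reasoning chains and treat agreement on the final answer as confidence in the complete statement. A toy sketch, with sample_chain standing in for a hypothetical model call:

```python
import random
from collections import Counter

# Statement-level confidence via agreement across sampled reasoning
# chains (a self-consistency-style aggregation, not DeepMind's method).

def statement_confidence(sample_chain, question: str, n: int = 10):
    answers = [sample_chain(question) for _ in range(n)]
    top_answer, votes = Counter(answers).most_common(1)[0]
    return top_answer, votes / n  # confidence attaches to the whole answer

# Toy "model" that reasons to the right answer 80% of the time
toy_model = lambda q: "42" if random.random() < 0.8 else "41"
print(statement_confidence(toy_model, "What is 6 x 7?"))  # e.g. ('42', 0.8)
```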

 

5. “Jagged Intelligence” as a First-Class Concept

  • He explicitly used the term “jagged intelligence” to describe how current systems excel in some areas while failing badly in others.

  • This terminology formalizes a widely felt but rarely named limitation of state-of-the-art models.

 

6. World Models Re-emphasized as Core Obsession

  • While not new in isolation, Hassabis reinforced that world models remain his longest-standing passion.

  • He sharply contrasted spatial, embodied learning with today’s LLM-dominant paradigm, calling out a fundamental gap that still blocks general intelligence.

 

What I appreciated most about the interview was its tone: ambitious but realistic, hopeful but honest. The future of intelligence won’t arrive in a single dramatic moment. It will come from working through many difficult, often unglamorous problems that we are only beginning to grasp.

 

Author: Prof May El Barachi

Dean of Computer Science & Full Professor, University of Wollongong in Dubai.

Academic leader in digital innovation, applied AI and industry-aligned technology education.

 

Computer Vision: Advancements, Applications, and Future Trends

Computer vision (CV) is a subfield of artificial intelligence that enables machines to process, analyze, and interpret visual inputs such as images and videos. In essence, computer vision algorithms strive to replicate human vision – recognizing objects, people, and scenes in digital imagery and extracting meaningful information. Modern CV encompasses a range of tasks including image classification (identifying what an image contains), object detection (locating and labeling multiple objects in an image), segmentation (precisely outlining objects or regions), and even scene understanding and action recognition. These capabilities have advanced dramatically in the last decade thanks to deep learning techniques and big data. Notably, breakthroughs in neural networks have boosted image recognition accuracy from around 50% to nearly 99% in less than ten years, a quantum leap that showcases the incredible potential of computer vision. This rapid progress, coupled with widespread industry adoption, has led to a booming CV market – valued at about $22 billion in 2023 and projected to exceed $50 billion by 2028. Such growth underscores that computer vision is not only a technical field but also a major driver of business value in the AI era.

 

Image Analysis using CNNs

A major catalyst for the rise of computer vision has been the convolutional neural network (CNN). CNNs are specialized deep learning models explicitly designed for image analysis; they excel at automatically learning hierarchies of visual features from raw pixel data. In a CNN, lower layers detect simple patterns like edges or textures, while deeper layers combine these into higher-level features (such as shapes or object parts), ultimately recognizing complex objects or scenes. This ability to discern intricate patterns has made CNNs the dominant architecture for tasks like image classification and object detection. Ever since AlexNet’s breakthrough in 2012, CNN-based models (e.g. VGG, ResNet, EfficientNet) have continuously pushed the state-of-the-art, enabling machines to classify images and detect objects with superhuman accuracy in some cases. In industry, CNN-powered solutions are ubiquitous – from real-time face recognition in smartphones to defect detection on assembly lines – due to their proven reliability and accuracy. In fact, CNNs have been the primary deep learning model for image processing tasks for much of the 2010s, and they remain fundamental building blocks in computer vision systems.
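For readers who want to see this hierarchy in code, below is a minimal PyTorch sketch of a CNN classifier: stacked convolution and pooling layers build from edges and textures toward higher-level features, and a linear head makes the final prediction. It is a teaching skeleton, not a production architecture.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Minimal CNN for 32x32 RGB images (illustrative, untrained)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges/textures
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SimpleCNN()
logits = model(torch.randn(1, 3, 32, 32))  # one fake image
print(logits.shape)                        # torch.Size([1, 10])
```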

 

That said, the landscape of CV models is evolving. Vision Transformers (ViTs) and other attention-based architectures have recently emerged as powerful alternatives to CNNs. ViTs apply transformer neural network techniques (originally developed for language) to image patches, modeling global relationships in an image through self-attention. Thanks to this global context modeling, Vision Transformers often match or even exceed CNN performance on image recognition tasks. This development signals a shift: while CNNs still power most production CV applications today, future image analysis systems may increasingly leverage transformers or hybrid architectures for improved accuracy and flexibility. For practitioners and businesses, it’s important to recognize that CNNs continue to be core workhorses in computer vision – especially for edge deployments requiring efficient inference – but new model innovations are on the horizon that could further enhance image analysis capabilities.
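The core ViT move fits in a few lines: carve the image into fixed-size patches, embed each patch as a token, and let self-attention relate every patch to every other. The sketch below shows just that step; a real ViT adds positional embeddings, a class token, and many stacked transformer blocks.

```python
import torch
import torch.nn as nn

patch, dim = 16, 128
to_patches = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify + embed
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

img = torch.randn(1, 3, 224, 224)
tokens = to_patches(img).flatten(2).transpose(1, 2)  # (1, 196, 128): 14x14 patches
out, weights = attn(tokens, tokens, tokens)          # every patch attends to all
print(out.shape)                                     # torch.Size([1, 196, 128])
```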

 

Image Generation using GANs

Beyond analyzing existing images, computer vision techniques can also generate entirely new images. A landmark innovation in this area was the invention of Generative Adversarial Networks (GANs) in 2014. A GAN consists of two neural networks: a generator that creates synthetic images and a discriminator that evaluates whether images are real or artificially generated. These two networks are trained in tandem in a competitive “game”: the generator tries to fool the discriminator by producing increasingly realistic images, while the discriminator learns to better distinguish fakes from genuine images. Over time, this adversarial training process yields a generator capable of outputting images so realistic that even the discriminator (or a human eye) can hardly tell they are fake.
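The adversarial game compresses into a short training loop. The sketch below uses toy fully connected networks and random stand-in data purely to show the two alternating updates; real GANs use convolutional architectures and many stabilization tricks.

```python
import torch
import torch.nn as nn

# Generator maps noise -> flattened 28x28 image; discriminator outputs a logit.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real = torch.rand(32, 784)  # stand-in for a batch of real images

for step in range(3):
    # Discriminator update: label real as 1, generated as 0
    fake = G(torch.randn(32, 64)).detach()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make D label fakes as real
    fake = G(torch.randn(32, 64))
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```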

 

The GAN approach has unleashed a wave of image generation and creative AI applications. Early GAN models could produce blurry handwritten digits or faces, but modern GANs like NVIDIA’s StyleGAN can generate hyper-realistic human faces, artwork, and even video frames. In industry, GANs and their variants are used for tasks such as creating synthetic training data (e.g. generating rare defect images to train inspection systems), enhancing image resolution (super-resolution), and producing photorealistic virtual try-on visuals or game scenery. The technology has also given rise to deepfakes – AI-generated imagery or video impersonations – highlighting both the power and the ethical challenges of image generation AI.

 

Recently, the field has expanded beyond GANs to include other generative methods like diffusion models and transformer-based generators. Text-to-image AI systems (e.g. OpenAI’s DALL-E 3, Stable Diffusion XL) have dramatically improved the quality and realism of generated images from textual descriptions. These generative models can turn a written prompt into a detailed image, enabling new creative workflows in design, advertising, and entertainment. Businesses are already leveraging such tools for content creation – for example, auto-generating product images or marketing graphics tailored to a campaign. In summary, GANs pioneered the era of AI image synthesis, and ongoing advancements (including generative diffusion models) continue to push the boundaries of what computers can imaginatively create in the visual domain.
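For a sense of how accessible this has become, here is a minimal text-to-image call using Hugging Face’s diffusers library. It assumes diffusers, transformers, and torch are installed, a GPU is available, and the referenced checkpoint can be downloaded (substitute any available Stable Diffusion model ID).

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes GPU + downloadable weights; swap in any available SD checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

image = pipe("product photo of a leather backpack, studio lighting").images[0]
image.save("backpack.png")
```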

 

Computer Vision Application Areas

Computer vision is being adopted across many industries. Below are some of the most prominent sectors and use cases where computer vision is making an impact:

  • Manufacturing: Automated visual inspection systems use CV for quality control on production lines, spotting defects or irregularities far more reliably and quickly than the human eye. CV also assists in inventory management by scanning and tracking stock items in warehouses. These applications help manufacturers improve yield, reduce waste, and ensure consistency in product quality.

  • Healthcare: In medical imaging, CV algorithms aid doctors by detecting diseases and anomalies in scans (X-rays, MRIs, CT scans) with high accuracy. For example, CV models can highlight potential tumors or pneumonia indicators on X-rays for radiologists. By automating image analysis, computer vision helps diagnose conditions earlier and with fewer errors, and it even guides surgeons in precision robotics and treatment planning.

  • Retail and E-commerce: Computer vision enables innovative retail experiences such as Amazon’s “Just Walk Out” stores, where cameras track what items customers pick up so they can be charged automatically without a checkout line. In e-commerce, CV powers virtual try-on tools (using augmented reality and pose estimation) that let shoppers see how clothing or accessories would look on them before buying. These applications boost customer engagement and sales while reducing return rates.

  • Transportation (Autonomous Vehicles): Self-driving cars and advanced driver-assistance systems rely heavily on computer vision to perceive their surroundings. Cameras (alongside lidar/radar) feed CV models that detect lane markings, traffic signs, signals, pedestrians, and other vehicles in real time. This enables the vehicle’s AI to make safe driving decisions (steering, braking, etc.). Drones and unmanned aerial vehicles similarly use onboard vision for navigation and obstacle avoidance. In the transportation sector, CV serves as the “eyes” of the autonomy revolution.

  • Security and Surveillance: CV enhances security by enabling automated surveillance and detection. For instance, intelligent CCTV cameras can recognize faces or identify suspicious activities without human monitoring. In public safety contexts, computer vision aids in spotting intruders, detecting weapons or accidents, and alerting authorities in real time. While these applications raise privacy concerns, they are increasingly used in airports, stadiums, and smart cities to improve security through automation.

  • Agriculture: Advanced farming employs computer vision via cameras on drones, robots, or tractors to monitor crop health and farm conditions. CV systems can analyze aerial images of fields to identify pest infestations, detect nutrient deficiencies (through leaf color/texture), and estimate crop yields. Targeted actions like precision spraying of herbicides on weeds become possible, making agriculture more efficient and reducing chemical use.

  • Robotics: Many modern robots incorporate vision to interact with the world. Industrial robots use CV to locate and grasp objects on assembly lines, sorting systems use it to recognize and route items, and delivery robots and warehouse AGVs navigate using vision-based SLAM (simultaneous localization and mapping). In fields like healthcare, robotic assistants leverage vision to perform delicate tasks (e.g. surgical robots “see” the operative field). In essence, computer vision gives robots the sensory input needed to operate autonomously and safely alongside humans.

 

Each of the above application areas illustrates how CV is driving tangible value – from cutting costs via automation to enabling entirely new products and experiences. As a result, companies across sectors are investing in computer vision to gain competitive advantages.

 

What is Next in this Field?

Computer vision is evolving rapidly, and several key trends are poised to shape its future in the coming years:

  1. Augmented & Mixed Reality Everywhere: With tech giants releasing consumer-grade AR devices (e.g. Apple Vision Pro, Meta AR glasses), CV is expected to become even more prevalent in daily life. Computer vision will enable these devices to understand the environment – mapping surfaces, recognizing objects and people – so that digital content can be overlaid believably onto the real world. This trend will enhance experiences in retail (interactive shopping), education (immersive learning), gaming/entertainment, and professional training by blending virtual visuals with reality.

  2. Vision-Language and Multimodal AI: The frontier of AI is moving toward multimodal systems that combine vision with other data types (like natural language). By integrating visual understanding with language comprehension, AI agents can interact more intuitively with us and their environments. For example, robots or home assistants with vision-language models can see an object and understand spoken instructions about it (“grab the red book on the table”). Likewise, vision-language models like CLIP and GPT-4’s vision component allow zero-shot recognition of new objects from text descriptions (a CLIP sketch follows this list). This convergence of CV with language and audio will enable more interactive and context-aware AI – think AI customer service that can see a problem via camera, or AR glasses that respond to voice commands and visual cues.

  3. Enhanced 3D Perception: After conquering 2D images, computer vision is increasingly tackling the 3D understanding of environments. New techniques like neural radiance fields (NeRFs) allow AI to construct detailed 3D models of scenes from 2D images. Better depth perception and 3D object recognition will improve applications such as autonomous driving (with more accurate distance and spatial awareness), robotics (better navigation and manipulation in 3D space), and digital twins for industry. We will see CV systems that can not only detect what is in an image, but also understand an object’s shape, size, and position in the world – a crucial step for truly immersive AR/VR and realistic virtual simulations.

  4. Edge Computing and Real-Time Vision: To meet the demand for instant insights, there is a push to run computer vision on the edge – directly on devices like cameras, smartphones, and IoT sensors – instead of in the cloud. By processing visual data on-device, latency is reduced and privacy is improved (the raw images never leave the device). Techniques such as model quantization, pruning, and efficient CNN architectures are enabling high-performance CV in resource-constrained environments (a quantization sketch follows this list). This trend is vital for time-sensitive use cases: for instance, factory robots or self-driving cars cannot afford cloud delays and thus rely on on-board real-time vision. Expect to see more optimized vision AI chips and embedded CV software powering smart cameras, drones, AR glasses, and other edge devices in the near future.

  5. Generative AI for Synthetic Data & Content: As discussed, generative models (GANs, diffusion models) are now capable of producing very realistic images. A major emerging trend is using generative AI to create synthetic training data for computer vision. When real data is scarce or sensitive, companies can generate simulated images (for example, creating thousands of synthetic medical scans or factory defect images) to train CV models without costly manual data collection. Synthetic data can also help overcome biases and privacy issues by augmenting datasets in a controlled way. In addition, generative AI is being used for on-the-fly image augmentation, editing (e.g. removing objects from a scene or changing backgrounds), and even generating entire virtual worlds for simulation. This trend will accelerate model development and unlock new creative applications, as AI can increasingly imagine visuals that are useful for training or content creation (a simple augmentation sketch follows this list).

  6. Advanced Vision Architectures (Transformers & Foundation Models): We are entering an era of foundation models in vision – large pretrained models that can be adapted to many CV tasks. Vision Transformers and hybrid models are leading this charge, offering robust performance across classification, detection, and segmentation tasks. As noted, ViTs model images in a way that captures global context, and they have started to outperform traditional CNNs on various benchmarks. Meanwhile, tech companies are developing massive vision-language models (like multimodal GPT-style models) that understand images in the context of text, and universal segmentation models (like Meta’s Segment Anything Model) that generalize to segment any object. These foundation models can be fine-tuned for specific applications with relatively little data, making computer vision development more accessible and scalable. In the coming years, expect more “generalist” vision AI models that can perform multiple tasks (e.g. describe an image, answer questions about it, detect anomalies, etc.) – analogous to how large language models function – which businesses can harness and customize.

  7. Ethical and Trustworthy Vision AI: As computer vision permeates high-stakes domains (security, healthcare, automotive), there is growing focus on ethics, bias, and safety in CV systems. One aspect is developing methods to detect and counteract deepfakes and manipulated media; CV algorithms themselves are being employed to spot telltale signs of fake images or videos, helping maintain information integrity. Another aspect is addressing bias – for instance, ensuring face recognition works fairly across different demographics and doesn’t invade privacy in unwarranted ways. Regulators and societies are increasingly concerned with how vision AI is used (e.g. surveillance vs. civil liberties), so expect more guidelines and tools for explainable and responsible CV. Techniques like explainable AI for vision (highlighting which image regions influenced a decision) and privacy-preserving vision (blurring faces, federated learning on device) will become standard. In short, the next phase of CV will not just be about what the technology can do, but also implementing it in a transparent, fair, and secure manner.
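To ground a few of these trends in code, here are three minimal, hedged sketches in Python.

For the vision-language trend (#2): zero-shot recognition with CLIP via Hugging Face transformers. It assumes the libraries are installed and a local image file named photo.jpg exists; the candidate labels are arbitrary examples.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a red book on a table", "a coffee mug", "a laptop"]
inputs = processor(text=labels, images=Image.open("photo.jpg"),
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))  # scores with no task-specific training
```

For edge deployment (#4): post-training dynamic quantization in PyTorch, which converts linear layers to int8 weights. Accuracy impact should always be validated on the real task.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # int8 weights for Linear layers
)
print(quantized(torch.randn(1, 512)).shape)  # same interface, lighter model
```

And for synthetic data and augmentation (#5), the lightweight end of the spectrum: an on-the-fly augmentation pipeline with torchvision, so the model sees randomly varied versions of every training image each epoch.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
# Usage: pass as the `transform` argument of a torchvision dataset, e.g.
# torchvision.datasets.ImageFolder("data/train", transform=augment)
```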

 

Concluding Remarks

Computer vision has grown from a niche research area into a transformative technology that is fueling innovation across industries. From enabling autonomous machines to unlocking new insights in business data, the ability of AI to interpret visual information is a key component of modern “agentic” AI solutions. Crucially, this field continues to advance at pace – algorithms are getting more powerful, datasets bigger, and computing hardware faster – creating a positive feedback loop of progress. Industry leaders recognize the opportunity: the global CV market is already tens of billions of dollars and attracting heavy investment as organizations seek to improve efficiency, safety, and customer experiences through vision AI.

 

Looking ahead, we can expect computer vision to become even more ubiquitous and integrated into everyday products. Cameras are everywhere in the modern world; with AI, every camera can become a smart sensor that not only records visuals but also understands and reacts to them. This opens the door to smarter cities, smarter homes, and more adaptive intelligent agents all around us. For business leaders and developers, the takeaway is that computer vision is a maturing but still rapidly evolving field – those who stay abreast of the latest CV advancements (from CNNs to transformers, from GANs to generative data augmentation) will be well positioned to build the next generation of AI-driven solutions. In summary, computer vision’s journey is far from over: as it converges with other AI disciplines and we address challenges of ethics and deployment, CV will continue to redefine how machines see the world, and how we in turn interact with an AI-powered visual world.

 

Author: Prof May El Barachi 

Dean of Computer Science & Full Professor, University of Wollongong in Dubai

Academic leader in digital innovation, applied AI and industry-aligned technology education.

Unlocking the Potential of AI in Healthcare

Artificial intelligence is reshaping healthcare by automating routine tasks, enhancing diagnostic accuracy, and enabling proactive care, all while aiming to make complex technological solutions accessible and understandable.

In this edition, we'll dive deeper into how AI simplifies operations for patients, clinicians, and administrators, often completing human-like tasks more efficiently and cost-effectively.

However, it's worth noting that while AI offers tremendous promise, it must be implemented with careful consideration of ethical issues, such as data bias and privacy, to ensure equitable benefits.

Drawing from recent advancements as of 2025, AI is not just a tool but a transformative force that predicts, learns, and acts to reinvigorate modern medicine, from linking genetic codes to powering robotic assistants in surgery.

To illustrate, let's expand on key examples with real-world applications and emerging trends.

 

Early Disease Detection and Diagnosis

  • AI excels in early detection by processing signals from wearables or imaging devices to flag potential issues before they escalate.

  • For someone predisposed to conditions like cardiovascular disease, epilepsy, or diabetes, a smartwatch or sensor could monitor heart rate, blood sugar, or neurological patterns in real-time.

  • An AI model then analyzes this data to predict silent heart attacks, strokes, or seizures, alerting users or doctors promptly (a toy illustration of this idea follows this list).

  • Beyond wearables, AI interprets brain scans with remarkable precision; trained on thousands of images, it can detect stroke timing twice as accurately as human experts, enabling faster interventions that save lives.

  • Similarly, in orthopedics, AI reviews X-rays to identify fractures that radiologists overlook in up to 10% of cases, minimizing errors and reducing the need for follow-up scans.

  • In disease surveillance, AI models trained on population data can predict over 1,000 conditions, such as Alzheimer's or kidney disease, years in advance by spotting subtle patterns in MRIs or health records.

  • These capabilities are particularly valuable in under-resourced areas, where AI augments limited specialist availability.
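To make the wearable-alerting idea above concrete, here is a deliberately simple rolling-baseline sketch: flag readings that deviate sharply from a patient’s own recent history. Real systems use far richer models and clinical validation; this is illustration only, not medical advice.

```python
import statistics

def detect_anomalies(heart_rates, window=30, z_threshold=3.0):
    """Flag readings more than z_threshold deviations from a rolling baseline."""
    alerts = []
    for i in range(window, len(heart_rates)):
        baseline = heart_rates[i - window:i]
        mean = statistics.fmean(baseline)
        sd = statistics.pstdev(baseline) or 1.0  # guard against zero variance
        z = (heart_rates[i] - mean) / sd
        if abs(z) > z_threshold:
            alerts.append((i, heart_rates[i], round(z, 1)))
    return alerts

# Two hours of per-minute readings with one abnormal spike at minute 90
readings = [72 + (i % 5) for i in range(120)]
readings[90] = 150
print(detect_anomalies(readings))  # -> [(90, 150, <large z-score>)]
```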

 

Remote Monitoring for Chronic Conditions

  • For patients with ongoing illnesses, AI enables continuous oversight without constant hospital visits.

  • Sensors in devices track vital signs, movement, or even cough patterns, feeding data into models that detect anomalies—like a sudden drop in oxygen levels for COPD patients or mood shifts for mental health management.

  • Proactive alerts can summon help during episodes, such as falls or irregular heartbeats, improving quality of life for the elderly or those in remote locations.

  • Tools exemplifying this include AI-powered apps for chronic cough analysis and predictive-analytics platforms that have reduced emergency visits by 68%, allowing healthcare teams to intervene early and optimize resource use.

  • This approach not only cuts costs but also empowers patients with self-management insights, though integration with human care remains essential to address false positives.

 

Accelerating Medicine Development

  • AI revolutionizes drug discovery by simulating molecular interactions at speeds impossible for humans.

  • By inputting vast datasets on chemical behaviors, disease pathways, and past trials, models predict effective compound combinations for specific conditions, slashing development timelines from years to months.

  • Generative AI further innovates by creating synthetic datasets for rare diseases or modeling drug efficacy, as seen in platforms that simulate personalized treatment responses.

  • In 2025, this has led to breakthroughs in targeted therapies, such as AI-optimized vaccines or antivirals, with tools reducing clinical trial failures by identifying viable candidates early.

  • However, collaboration with traditional research ensures safety, as AI complements rather than replaces expert oversight.

 

Broader AI Applications in Healthcare

  • Expanding beyond these, AI handles administrative burdens through notetaking tools that transcribe consultations using speech recognition and natural language processing, cutting documentation time by up to 70% and allowing doctors more face-time with patients.

  • Clinical chatbots, powered by retrieval-augmented generation, answer medical queries accurately 58% of the time, guiding decisions and reducing readmissions by 30% (a skeleton of this pattern follows this list).

  • In training, AI simulates patient scenarios for medical education, providing personalized feedback to students.

  • For mental health, 24/7 chatbots offer support, tracking symptoms and suggesting coping strategies.

  • Even in ambulances, AI assesses patient needs with 80% accuracy based on vitals and history, aiding paramedics in triage.

  • Integrating traditional medicine, AI catalogs indigenous knowledge to discover new compounds, blending ancient wisdom with modern tech.
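As flagged above, here is a skeleton of the retrieval-augmented generation pattern behind such chatbots: retrieve vetted reference passages first, then instruct the model to answer only from them. The keyword retriever and the llm_generate hook are toy assumptions; production systems use vector search, source citations, and human oversight.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy keyword-overlap retrieval; real systems use vector search."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def answer(query: str, corpus: list[str], llm_generate) -> str:
    context = "\n".join(retrieve(query, corpus))
    prompt = ("Answer using ONLY the context below; say 'unsure' otherwise.\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    return llm_generate(prompt)  # llm_generate is a hypothetical model call

corpus = [
    "Metformin is a first-line therapy for type 2 diabetes.",
    "Amoxicillin is a penicillin-class antibiotic.",
]
# Toy "LLM" that just echoes its grounded prompt, to keep the sketch runnable
print(answer("first-line therapy for type 2 diabetes?", corpus, lambda p: p))
```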

 

Medical Data Analysis and Personalized Medicine

  • Moving to medical data analysis and personalized medicine, AI and machine learning tackle the overwhelming volume and variety of data that doctors face daily.

  • Traditional analysis of records, images, and histories is time-consuming, but deep learning models (advanced neural networks) excel at processing unstructured data like radiology scans, blood tests, EKGs, genomics, and patient histories.

  • This provides real-time insights, such as flagging abnormalities in images or correlating symptoms with underlying causes, enhancing diagnostic speed and accuracy.

Precision medicine takes this further by customizing treatments to patient subgroups rather than a blanket approach.

  • For ovarian cancer, machine learning analyzed 32 blood markers to identify early-stage patients with poor prognoses, uncovering hidden disease groups and guiding targeted therapies.

  • This data-driven method reveals interactions between variables that hypothesis-based research might miss, especially in multifactorial diseases.

  • AI also predicts drug responses via genetic markers, optimizes dosages to minimize side effects, and reshapes clinical decisions for conditions like autoimmune disorders or cancer.

 

Additional advancements include AI for diabetic retinopathy, where models screen retinal images instantly, reducing wait times from weeks to minutes in underserved areas.

  • In genomics, AI interprets sequences to inform therapy, as in pediatric brain tumors where it identifies subgroups amenable to less invasive treatments, avoiding long-term side effects.

  • For cardiovascular risks, it combines electronic health records with genetics for better predictions.

  • Environmental factors are integrated too, with AI forecasting outbreaks or toxin exposures.

  • Challenges like data bias persist, but synergies between AI and human expertise promise more equitable, effective care.

 

Guardrails that matter (so innovation scales safely)

  • Data quality & governance: unify sources, define stewardship, monitor drift.

  • Bias & equity: validate across demographics; track outcomes, not just accuracy.

  • Privacy & security: least-privilege access, auditability, and privacy-preserving options.

  • Clinical integration: design for the last mile; alert fatigue control, clear accountability, human oversight.

  • Change management: upskill clinicians, update SOPs, and align incentives to value-based care.

 

What leaders should do next

1. Start where data is strong (imaging, triage, documentation).

2. Pick 2–3 high-impact use cases with measurable KPIs (time-to-diagnosis, readmissions, clinician minutes saved).

3. Build the platform, not one-offs: interoperability, monitoring, model registry, reuse.

4. Establish an AI governance council (clinicians, data, legal, ethics, patient reps).

5. Invest in skills: clinicians fluent in data; pharmacists/nurses comfortable with AI tools; engineers who understand clinical workflows.

 

Try this (quick win in 30 days)

  • Pilot an ambient scribe in one clinic; measure note time, after-hours charting, and clinician satisfaction.

  • Add a radiology-assist model for one indication (e.g., fracture detection); track sensitivity/specificity and secondary reads saved.

  • Stand up a lightweight AI registry & monitoring dashboard (ownership, versioning, metrics, drift alerts).

 

Closing

AI in healthcare isn’t about replacing clinicians; it’s about amplifying them. The future of care will be defined not by the machines we build, but by how we use them: to heal, to connect, and to make high-quality care accessible to more people.

 

Author: Prof May El Barachi

Dean of Computer Science & Full Professor, University of Wollongong in Dubai

Academic leader in digital innovation, applied AI and industry-aligned technology education.