Being an oncologic surgeon is my primary job and passion. It allows me to interact with people and immerse myself in the healthcare system: not the fancy corporate healthcare, just everyday medicine.
And, as a researcher in AI, I’m noticing a growing disconnect between actual clinical practice and the prevailing objectives of AI researchers and companies. This is, of course, just a personal opinion and not a critique of current R&D processes, but it is a reflection grounded in some experience in both fields.
The disruptive potential of AI in consumer software and industry is now clear. However, we must acknowledge that AI in healthcare is an entirely different animal; the degree of complexity, regulation, and risk is significantly higher than in most other applications. Moreover, publicly available datasets are orders of magnitude scarcer than in many other domains due to privacy and accessibility constraints.
So, big blockers and a higher level of complexity.
I’m currently staying in Silicon Valley as a surgeon with a technical background in AI, which has given me direct access to this vibrant “ecosystem.” Meetings and conferences on AI are the order of the day. However, a few facts are difficult not to notice:
- Clinicians do not participate in AI events.
- Clinicians do not even participate in AI-for-healthcare events.
- AI healthcare research is driven by the technical side, with minimal feedback or collaboration from clinicians.
- Even among clinicians, there is insufficient collaboration regarding data sharing and technical development.
First, enthusiasm for new technologies pushes us to apply them to every problem: “If the only tool you have is a hammer, you tend to see every problem as a nail,” in the words of Abraham Maslow. And I absolutely understand this tendency. AI is our new Thor’s hammer; why wouldn’t we want to try it on anything even remotely appropriate?
However, this steers research and progress toward solving “technical puzzles” without answering a fundamental question: does this address a real clinical need? On one side, we find lighthearted examples of this tendency, such as the “That’s what she said” joke identifier (a fun solution, and I’m not criticizing it); on the other, cases where forcing a complex deep learning workflow onto a problem is expensive and unnecessary.
Second, typical “top-down” strategies are based on market analysis and market-share calculations. In brief: “Let’s find a big, profitable field in healthcare and jam-pack it with AI.” As always, this might be a great short-term strategy, but the magic disappears after a while.
These approaches are rarely effective in healthcare. Physicians and surgeons often revert to conventional practices when the advantages of a new solution are not evident. Planck’s principle can safely be applied to medical innovation: “science advances one funeral at a time.” For this reason, a 5–10% increase in operational efficiency, while significant at scale, is rarely enough to change behavior in the medical setting; we need a 2x–10x improvement in areas relevant to everyday clinical practice.
A practical approach would be to identify an actual problem, assess the efficacy of current solutions, and evaluate whether AI can be used to develop better ones: the classic Mom Test.
Currently, most major developments in AI for healthcare come from tech research groups and tech companies. This origin explains why the focus is skewed more toward the computer science side than the healthcare component.
Solving this issue will require the direct involvement of clinicians and surgeons.