Powering the next generation of healthcare applications with Google DeepMind's state-of-the-art MedGemma AI models.
Experience the power of the MedGemma 4B IT model for medical text and image analysis.
MedGemma is a collection of state-of-the-art AI models designed specifically to understand and process medical text and images. Developed by Google DeepMind, MedGemma represents a significant advance in the field of medical artificial intelligence.
Built on the powerful Gemma 3 architecture, MedGemma has been optimized for healthcare applications, giving developers robust tools for building innovative medical solutions.
As part of the Health AI Developer Foundations, MedGemma aims to democratize access to advanced medical AI technology, enabling researchers and developers around the world to build more effective healthcare applications.
Launched at Google I/O 2025
Released as part of Google's ongoing efforts to enhance healthcare through technology
Powerful capabilities designed for medical applications
Processes both medical images and text with 4 billion parameters, using a SigLIP image encoder pre-trained on de-identified medical data.
Optimized for deep medical text comprehension and clinical reasoning with 27 billion parameters.
Build AI-based applications that examine medical images, generate reports, and triage patients.
Accelerate research with open access to advanced AI through Hugging Face and Google Cloud.
Enhance patient interviewing and clinical decision support for improved healthcare efficiency.
Implementation guides and adaptation methods
MedGemma models are accessible on platforms like Hugging Face, subject to the terms of use by the Health AI Developer Foundations.
# Example Python code to load the MedGemma 4B IT model
# The 4B IT checkpoint is multimodal, so use the image-text-to-text classes
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("google/medgemma-4b-it")
model = AutoModelForImageTextToText.from_pretrained("google/medgemma-4b-it", device_map="auto")
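Once loaded, the model can be queried through its chat template. The snippet below is a minimal inference sketch building on the processor and model loaded above; the local image path and the prompt are illustrative placeholders rather than part of an official example.

# Minimal inference sketch (image path and prompt are illustrative placeholders)
from PIL import Image

image = Image.open("chest_xray.png")  # hypothetical local file
messages = [
    {"role": "user", "content": [
        {"type": "image", "image": image},
        {"type": "text", "text": "Describe the key findings in this chest X-ray."},
    ]}
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))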
Use few-shot examples and break complex tasks into subtasks to improve performance (a few-shot prompting sketch follows this list).
Fine-tune the models on your own medical data, following resources like the GitHub notebooks (a minimal LoRA sketch also follows this list).
Integrate with tools like web search, FHIR generators, and Gemini Live.
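To illustrate the few-shot pattern mentioned above, the sketch below reuses the processor and model loaded earlier and places two worked examples in the chat history before the real query; the report snippets and labels are purely illustrative.

# Few-shot prompting sketch: prior user/assistant turns serve as worked examples (content is illustrative)
few_shot = [
    {"role": "user", "content": [{"type": "text", "text": "Classify the finding as NORMAL or ABNORMAL: 'No acute cardiopulmonary abnormality.'"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "NORMAL"}]},
    {"role": "user", "content": [{"type": "text", "text": "Classify the finding as NORMAL or ABNORMAL: 'Right lower lobe consolidation.'"}]},
]
inputs = processor.apply_chat_template(
    few_shot, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=10)
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))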
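Fine-tuning is commonly done with parameter-efficient methods such as LoRA; the sketch below shows one such approach with the peft library, applied to the model loaded earlier. The rank, alpha, dropout, and target modules are illustrative choices, not prescribed values.

# Minimal LoRA fine-tuning sketch with peft (hyperparameters are illustrative)
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)  # `model` as loaded above
peft_model.print_trainable_parameters()
# Train the adapter on your own de-identified medical data, e.g. with a standard supervised fine-tuning loop.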
Choose the right deployment method based on your requirements:
Run models locally for experimentation and development purposes.
Deploy as scalable HTTPS endpoints on Vertex AI through Model Garden for production-grade applications.
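As a rough illustration of the Vertex AI path, the sketch below queries an already-deployed MedGemma endpoint with the google-cloud-aiplatform SDK. The project ID, region, endpoint ID, and request schema are placeholders; the actual instance format depends on the serving container chosen when deploying from Model Garden.

# Querying a MedGemma endpoint deployed from Model Garden (all identifiers are placeholders)
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")
endpoint = aiplatform.Endpoint("projects/your-project-id/locations/us-central1/endpoints/1234567890")
# The instance schema depends on the serving container chosen at deployment time.
response = endpoint.predict(instances=[{"prompt": "Summarize the patient's presenting symptoms.", "max_tokens": 256}])
print(response.predictions)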
MedGemma models are not clinical-grade out of the box. Developers must validate performance and make necessary improvements before deploying in production environments.
The use of MedGemma is governed by the Health AI Developer Foundations terms of use, which developers must review and agree to before accessing models.
Frequently asked questions about MedGemma
The multimodal 4B model processes medical images and text, while the 27B model focuses on text processing and clinical reasoning.
No. MedGemma models require validation and improvement before deployment in production environments.