Wellness That Matters: Black Health News & Community Care
Why the rise of AI means we all need to slow down and think before we repost
We’re living in a time when it’s getting harder and harder to tell what’s real.
That viral video? Could be AI.
That “new research” article? Might be made up.
Even the person who “wrote” that blog post might not exist at all.
With tools like Google’s Veo and the next generation of AI models, we’re entering a space where images, audio, video, and full news stories can be entirely generated. They can look beautiful, sound convincing, and have zero human truth behind them. That might sound like innovation to some, but to those of us in public health, it sounds like a crisis.
At our recent Power Play event, Latroya Hester, the founder of Comms Noire, reminded us of something we already know deep down: misinformation moves fast, especially when it hits our emotions. We don’t always check what we’re sharing. We read a headline and repost. We see a video that looks dramatic or urgent and we send it to the group chat. We want to help. We want to do something. But sometimes doing nothing is actually the smartest move.
Here’s your first tip:
You do not have to share.
Take a breath. Ask yourself: Who made this? Where did it come from? Who benefits if I believe it?
This wave of AI-generated content isn’t just showing up in ads and entertainment. It’s showing up in healthcare. And when bad data or misinformation sneaks into conversations about reproductive care, chronic illness, menopause, or rare disease, it can cause real harm.
Here’s what you can do right now:
- Pause before you repost. Ask yourself if the source is credible. Did the person sharing it link to the original study or article? Does it sound too good or too wild to be true? If so, it probably is.
- Be critical of images and videos. We are officially in the era where “seeing is believing” doesn’t always apply. AI can now generate videos of people saying things they never actually said. Before you assume something is real, look for context and confirmation.
- Check with trusted voices. Stick to platforms and people you trust, especially when it comes to health information. If you’re unsure, visit sources like BWHI.org, the CDC, or ask a healthcare professional. Your cousin’s Facebook post does not count as peer-reviewed.
- Call it out when you see it. If someone in your circle is sharing questionable content, it’s okay to gently check in. You don’t need to argue. A simple “Hey, not sure that’s legit—do you know where it came from?” can go a long way. That’s how we protect each other.
- Remember that tech doesn’t have values. People do. AI can’t know what matters to our communities unless we shape the technology, ask hard questions, and push for ethical standards. If we’re not involved, the systems will keep repeating the same gaps, the same erasure, and the same harm.
We can’t afford to stay passive in a world that’s moving this fast.
Information is only power if it’s true.
So next time you feel like hitting share… take a beat.
Think twice.
And maybe just close the app and drink some water.