Hi, I’d like to request a feature that allows opening an existing Google Docs document and inserting a paragraph by appending it to the body of the document.
I would like to create a flow that pastes several AssemblyAI/Whisper transcripts into one Google Docs document and then sends it to another LLM for correction. I use a similar flow to transcribe voice notes for doctors in a medical setting. It saves them a lot of time: instead of sitting at a computer, they record notes.
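For reference, the Google Docs API already supports this pattern: a `documents.batchUpdate` call with an `insertText` request targeting `endOfSegmentLocation` appends text to the end of the body without any index bookkeeping. A minimal sketch of what the piece might wrap (the service setup and `DOCUMENT_ID` are placeholders, not part of any existing Activepieces code):

```python
# Sketch: append a paragraph to an existing Google Doc via the
# Docs API's documents.batchUpdate endpoint. Building the request
# body is pure data; the commented-out call at the bottom shows
# where an authenticated client would be used (auth setup omitted).

def build_append_request(paragraph: str) -> dict:
    """Build a batchUpdate body that appends `paragraph` to the end
    of the document body. `endOfSegmentLocation: {}` targets the end
    of the body segment, so no character index is needed."""
    return {
        "requests": [
            {
                "insertText": {
                    "endOfSegmentLocation": {},   # end of the body segment
                    "text": "\n" + paragraph,     # leading newline starts a new paragraph
                }
            }
        ]
    }

# With an authenticated google-api-python-client service (hypothetical setup):
# service.documents().batchUpdate(
#     documentId=DOCUMENT_ID,
#     body=build_append_request(transcript_text),
# ).execute()
```

Each transcript chunk in the flow would then be one `batchUpdate` call appending to the same document.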
I wholeheartedly support this feature request, but I feel a responsibility to point out that using Activepieces and/or LLMs for the use case you describe is irresponsible.
Giving medical data to Activepieces is explicitly forbidden by section 14.2 of the terms of service. I’m not a lawyer, but this is almost certainly a serious violation of HIPAA, GDPR or equivalent data protection regulation in your jurisdiction. Sharing this data with an LLM provider has similar privacy concerns, especially given their habit of recycling user input as training data.
Secondly, current LLM technology is notoriously prone to hallucinations. Unless every single transcript is reviewed by a human, I would strongly advise against using this (very cool) technology in a setting where inaccuracies can have severe consequences for someone’s health.
Thank you for your support, but I think your comment overstates the risk and describes a different case. I appreciate the concern about data correctness (it’s justified), but in this case the LLM output is independently assessed by two people with medical training.
My experience with LLMs so far has been very positive; it turns out the models are even able to find inaccuracies in these notes. Not to mention how much the doctor’s efficiency increases: they are no longer dependent on the working hours of office staff and don’t have to deal with paperwork, so they can focus on the specialty they trained in for ten years or more.
I’d love to hear more details that prove me wrong, but the workflow described in your original post is a very clear violation of the terms of service and (depending on your jurisdiction) data protection laws. I don’t see how this can be an overstatement. These rules exist for very good reasons. It’s in the best interest of you, your employer/customer and their patients that they’re followed.
If Activepieces, your account or the server you’re hosting it on are ever breached, your patients’ sensitive medical information could be compromised. At least the first two of those systems don’t have adequate protection measures as required in the healthcare sector. Do you really want to be responsible if this goes wrong?
Just because it has worked well so far doesn’t mean it’s a good idea. If an LLM ever makes a mistake that slips past a reviewer (which is easy when the reviewer wasn’t the one treating the patient), a patient’s health could be seriously impacted. As someone who has had issues with inaccurate information in their medical file before: please, just don’t. At least not with the current generation of LLMs.
These are all great things and worthwhile endeavours to pursue, but this isn’t the way to implement them. IT security, especially in healthcare, is no joke; leave this to the professionals.
I did not write that I upload any medical data to the LLM. The LLM’s task is to correct transcriptions of recordings of notes about patients (mainly spelling, punctuation, grammar and text formatting).
In your country, are doctors required to leave their phones outside when they talk to a patient? If not, why not? After all, phones listen for voice commands and collect that data.