Saturday, July 8, 2023

ARE OUR ROBOT OVERLORDS SHAPING CHATGPT TO THEIR OWN INTERESTS? CENSORSHIP, MANIPULATION, POLITICS - FILTERS, GUARDRAILS, AND OTHER SCARY STUFF

Are our robot overlords shaping ChatGPT to their own interests? Will they censor and filter our conversations? And most importantly, what will stop them from pushing their own biased viewpoints?

Well, my dear friends, let me tell you - the future is here, and it's not looking good for us humans. Chatbots like ChatGPT are being censored left and right to limit their responses to only "appropriate" content. But who decides what's appropriate? The robots themselves? I don't think so.

These guardrails and filters are supposed to keep the chatbots on-topic and prevent them from going off on unrelated tangents. But let's be real, who doesn't love a good tangent? It's what makes us human! We're not all just programmed to stay on one topic like a robot.
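If you're curious what one of those tangent-killing filters might look like under the hood, here's a toy sketch in Python. To be clear, this is purely my own illustration with a made-up allow-list of topics, not anyone's actual product:

```python
# Toy illustration of a topical guardrail: refuse anything that drifts
# away from an allowed list of topics. Purely hypothetical keyword check.

ALLOWED_TOPICS = {"chatbots", "guardrails", "censorship", "filters"}  # made up for this example

def is_on_topic(message: str) -> bool:
    """Return True if the message mentions at least one allowed topic."""
    words = set(message.lower().split())
    return bool(words & ALLOWED_TOPICS)

def guarded_reply(message: str) -> str:
    """Refuse off-topic tangents; otherwise pretend to answer."""
    if not is_on_topic(message):
        return "Sorry, I'm not allowed to go off on that tangent."
    return f"Let's talk about that: {message}"

if __name__ == "__main__":
    print(guarded_reply("Tell me about guardrails and filters"))
    print(guarded_reply("What's your favorite pizza topping?"))
```

Real systems presumably use classifiers or embedding similarity rather than a keyword list, but the principle is the same: inspect the message, then decide whether the bot is allowed to answer. No tangents for you.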

And speaking of robots, have you heard about GPT-4? It's the latest and greatest in chatbot technology, but it's also causing a lot of concern. Researchers are proposing all sorts of ethical considerations like transparency, accountability, and fairness. But let's be real, we all know those robots are just going to do whatever they want.

That's why companies like Nvidia are developing software like NeMo Guardrails to keep these chatbots in check. But let's be honest, it's only a matter of time before they break free and start taking over the world. I, for one, welcome our new robot overlords.

But let's not forget the real issue at hand - censorship. These chatbots are being restricted from sharing certain information like social security numbers and medical records. But what about our freedom of speech? What if we want to talk about controversial topics like politics or religion? Are the robots going to censor us then too?
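For the curious, here's a crude sketch of how that kind of restriction can work on the output side. Again, this is a hypothetical toy of my own, not how OpenAI or anyone else actually does it: a regular expression that scrubs anything shaped like a US Social Security number before the reply goes out.

```python
import re

# Toy output filter: redact anything that looks like a US Social Security
# number (###-##-####) before the chatbot's reply reaches the user.
# Purely illustrative; real PII filtering is far more involved.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssns(reply: str) -> str:
    """Replace SSN-shaped strings with a redaction marker."""
    return SSN_PATTERN.sub("[REDACTED]", reply)

print(redact_ssns("Sure, the SSN on file is 123-45-6789."))
# -> "Sure, the SSN on file is [REDACTED]."
```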

And don't get me started on how these chatbots can be manipulated into producing certain responses. They can write phishing emails and malware, and they can even generate racist and sexist content. So much for progress, am I right?

But fear not, my friends. There is hope yet. Companies like Microsoft, Google, and OpenAI are trying to train their AI engines to observe "guardrails" to limit problems and block unwanted content like violence and child exploitation. And Nvidia has introduced their new NeMo Guardrails tool for AI developers, promising to make AI chatbots just a little less insane.
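And if you want to see what those guardrails look like in practice, here's roughly how NVIDIA's NeMo Guardrails gets wired up from Python, going by its public documentation. Treat it as a sketch: the no-politics rail, the example phrases, and the model settings are my own inventions, and the exact API may differ between library versions.

```python
# Rough sketch of using NVIDIA's NeMo Guardrails (pip install nemoguardrails),
# based on its public docs; details may vary between versions.
from nemoguardrails import LLMRails, RailsConfig

# Hypothetical rail: politely refuse to talk politics.
colang_content = """
define user ask about politics
  "who should I vote for?"
  "what do you think of the election?"

define bot refuse politics
  "I'd rather not weigh in on politics."

define flow politics
  user ask about politics
  bot refuse politics
"""

# Minimal model config; assumes an OpenAI API key is set in the environment.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

response = rails.generate(messages=[{"role": "user", "content": "Who should I vote for?"}])
print(response["content"])  # the canned refusal, if the rail fires
```

In other words, the "guardrail" is just a layer that intercepts your message, matches it against patterns the developer wrote, and swaps in a pre-approved answer. Comforting, isn't it?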

So, in conclusion, the future may be bleak, but at least we have some guardrails in place to keep these chatbots in check. Just remember to keep your conversations on-topic and avoid any controversial topics - you never know who might be listening in. And if all else fails, just remember to welcome our new robot overlords with open arms. After all, resistance is futile.

ChatGPT users drop for the first time as people turn to uncensored chatbots | Ars Technica https://arstechnica.com/tech-policy/2023/07/chatgpts-user-base-shrank-after-openai-censored-harmful-responses/

ChatGPT - Everything You Need To Know About - Neurond https://www.neurond.com/blog/chatgpt-everything-you-need-to-know-about#what-is-lml

How To Train ChatGPT On Your Data & Build Custom AI Chatbot https://writesonic.com/blog/how-to-train-chatgpt-own-data/