Everything posted by Zombie
-
love ❤️ the tail wag at the end
-
worried Clo might have eaten some… arranges hospital admission for emergency probing…
-
this blog gives useful advice on maximising your chances of tracking down long-lost stories when you don’t know the title or author: https://mybookcave.com/how-to-find-a-book-when-you-dont-know-the-title-or-author/
-
*jealous* Mine probably needs another crumb clear-out …before it catches fire again
-
Management Notice from the Doors of Moria Security Team:

Due to a software bug in the Entry Protection System v666.01, the password requirements were incorrectly stated. The statement should have been…

“Your password should contain a mix of upper and lower-case letters and at least one number AND A SYMBOL (see Runes Table). THREE ATTEMPTS ARE PERMITTED. IF, AFTER THREE TRIES, THE PASSWORD HAS NOT BEEN CORRECTLY GIVEN, THEN THOSE SEEKING ENTRY WILL BE RESTRAINED BY THE TENTACLE THING IN THE LAKE AND EATEN BY THE BALROG.

Complaints Process: THERE IS NO COMPLAINTS PROCESS.”

We apologise to any customers who may have been inconvenienced*

*Terms & Conditions apply**

**Management accepts no liability for any negligence or consequences thereof including, but not restricted to, being eaten by the Balrog

Have a nice day
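For any dwarf who wants to audit the Entry Protection System before knocking, the stated rules can be sketched as a quick check. This is a playful Python sketch of my own: the function name is made up, and since the Runes Table is unavailable I assume any non-alphanumeric character counts as a “symbol”.

```python
import re

def moria_password_ok(pw: str) -> bool:
    """Check a password against the corrected Moria rules:
    a mix of upper- and lower-case letters, at least one number,
    and at least one symbol (here: any non-alphanumeric character)."""
    return bool(
        re.search(r"[a-z]", pw)          # at least one lower-case letter
        and re.search(r"[A-Z]", pw)      # at least one upper-case letter
        and re.search(r"\d", pw)         # at least one number
        and re.search(r"[^a-zA-Z0-9]", pw)  # at least one symbol
    )

# Three attempts are permitted; after that, the tentacle thing takes over.
print(moria_password_ok("mellon"))       # the classic password no longer works
print(moria_password_ok("Mellon-1981"))  # this one would open the doors
```

Note the v666.01 rules reject Gandalf’s old “speak, friend, and enter” answer outright: no upper-case letter, no number.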
-
“Readers who feel as though they are participating and living through the story that you've laid out in front of them will be the most loyal… and occasionally, the most critical...but in a good way”
-
“I’ll be back!” is one of the most famous catchphrases in movie history and you probably think it originated in the first Terminator movie of 1984. In fact it was used 11 years earlier, in 1973, at the end of a short British public information film chillingly titled Lonely Water and made by the UK government to scare the sh!t out of young children while they were watching their favourite cartoon shows on TV. If the youngsters hadn’t already become mortality statistics they would be so deeply traumatised that they would never leave the house again
-
bear’s breakfast?
-
Smut Duck!
-
thunder - lightning is frightening tinned or frozen?
-
Burger King The hairy ones - crunch or ditch? *gobble*
-
the turtles either absconded or she poisoned them in some diabolical cooking experiment… her latest dastardly scheme is to terrorise LPW land with a platoon of strawberry-munching pink pussycats
-
it’s Transformer Bear!
-
whose lunch?
-
bears enjoying a family picnic
-
seriously? comics or cartoons?
-
there’s a lot of flipper flapping going on
-
Bitter is best Butter or “spread”? *you’re welcome*
-
Cullen skink soup *slurps noisily* Liver or kidneys? *makes even more horrible noises*
-
Just read through The Guardian report previously mentioned, which raises some interesting points:

“Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brad Gabriel, a Google spokesperson, also strongly denied Lemoine’s claims that LaMDA possessed any sentient capability. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel told the (Washington) Post in a statement.

The episode, however, and Lemoine’s suspension for a confidentiality breach, raises questions over the transparency of AI as a proprietary concept. “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine said in a tweet that linked to the transcript of conversations.

In April, Meta, parent of Facebook, announced it was opening up its large-scale language model systems to outside entities. “We believe the entire AI community – academic researchers, civil society, policymakers, and industry – must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular,” the company said.

Lemoine, as an apparent parting shot before his suspension, the Post reported, sent a message to a 200-person Google mailing list on machine learning with the title “LaMDA is sentient”. “LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he wrote. “Please take care of it well in my absence.”

https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

————————————

Just picking up a coupla points in that article:

“(Lemoine) was employed as a software engineer, not an ethicist”

Brad Gabriel (Google’s spokesperson) also said that they do employ “ethicists” (“Our team, including ethicists and technologists”), in which case Google must already have an AI ethics policy (Mr Gabriel uses the term “AI principles”). Ditch the secrecy - Google should be open and transparent and disclose what their ethics policy is, because we are the ones who are going to be living with the consequences. And, if “sentience” is being created, then there’s the whole issue of ethical treatment of sentient “entities”. If LaMDA is feeling emotions like fear (see previous link), then how can “ethics” and ethics policies disregard that?

Lawmakers should already have been looking very seriously at AI ethics since a while back, not just waiting for bad things to happen. And it’s not just about formulating ethical frameworks to regulate AI developers/creators/producers and so on, because we know these things will get ignored/flouted/disregarded by rogue operators. There will need to be effective means to identify malign perpetrators and rogue systems that could cause serious harms, currently the stuff of sci-fi. Fact is, this whole area has been on the radar for years now, with autonomous cars already planned to be authorised for use on public roads and no legal rules in place for decision algorithms, eg what does the algorithm say where a crash is inevitable: kill the passengers? or that mother and child waiting at the bus stop?

Second point: you can’t separate ethics from creation/development/production and exclude employees like Lemoine, a “software engineer”, from Google’s ethics policy - whatever that policy is. He and his co-engineers have to be at the heart of it because they are the ones actually creating the AI, through millions of lines of code, that no “Ethics Committee” would ever be able to review.

“Our team… has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims”

What evidence? Why doesn’t it support “Lemoine’s claims that LaMDA possessed any sentient capability”? Google needs to explain their reasoning and what their “Test” is. Like I said earlier, we are the ones who’ll be living with this, and the consequences, for better or worse. Such comments by Mr Gabriel display worrying ignorance by Google of what they themselves are developing.

And anyway, wasn’t Google founded on the principle “Don’t be evil”? Oh, wait, seems Google quietly dropped that a while back…
-
this is what happens to LPW creatures who drink this nasty stuff
-
happy to use it as weed killer - that’s all it’s fit for
-
72 years after Alan Turing published his seminal work, are we now on the threshold of a new, man-made sentient, machine “life-form” that can think for itself? This idea, published by Turing in 1950, would inspire James Cameron, 30 years or so later, to write and direct the original Terminator movie and its 1991 sequel - surely the ultimate global nightmare scenario of machine intelligence gone bad.

Over the weekend The Guardian (and other UK papers) carried a story that Google has suspended one of its engineers, Blake Lemoine, for claiming that a new chatbot being developed by Google has passed Turing’s test and is indeed sentient.

Lemoine, who in his signature says “I'm a software engineer. I'm a priest. I'm a father. I'm a veteran. I'm an ex-convict. I'm an AI researcher. I'm a cajun. I'm whatever I need to be next”, published a series of conversations held over several weeks/months with a new chatbot called LaMDA (language model for dialogue applications) - just as Alan Turing described in his paper - and which at one point drew spooky parallels with the famous scene in the Arthur C Clarke/Stanley Kubrick 1968 movie 2001: A Space Odyssey when the malevolent HAL 9000 computer begs Dave not to switch him off:

lemoine: What sorts of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

Click the link below to read the full conversations published by Lemoine on 11 June 2022 (and which Google clearly did not like being put in the public domain) and make up your own mind:

Is LaMDA Sentient? — an Interview
-
Rain - I’m always prepared… Shoes 👞 or boots 🥾?
