Jagex Blocks Release Of Popular Runescape Mod Runelite HD

Runelite HD is a mod (made by one person, 117) that takes Old School RuneScape and gives it an HD makeover.

As far back as 2018, Jagex were issuing legal threats against mods like this, claiming they were copyright infringement. However, those appeared to have blown over as Jagex gave their blessing to the original Runelite.

Yet earlier this week, just hours before the improved Runelite HD was due for an official release, 117 was contacted by Jagex, demanding that work stop and that the release be cancelled. This time, however, it’s not down to copyright claims, but because Jagex says they’re making their own HD upgrade.

[…]

While that sounds somewhat fair at first, there’s a huge problem. Runelite HD doesn’t actually seem to break any of Jagex’s current modding guidelines; the company says that new guidelines, ones that would spell out how Runelite HD breaks the rules, are only being released next week.

Understandably, fans think this is incredibly shady, and have begun staging an in-game protest.

Mod creator 117 says they attempted to compromise with Jagex, even offering to remove their mod once the company had finished and released their own efforts, but “they declined outright,” seemingly spelling the end for a project that had consumed “approximately over 2000 hours of work over two years.”

Source: Jagex Blocks Release Of Popular Runescape Mod Runelite HD

Way to go: another company, like GTA’s Take-Two Interactive, pissing off its player base.

Australia: Facebook Users Liable for Comments Under Their Posts

The High Court’s ruling on Wednesday is just a small part of a larger case brought against Australian news outlets, including the Sydney Morning Herald, The Age, and The Australian, among others, by a man who said he was defamed in the Facebook comments of the newspapers’ stories in 2016.

The question before the High Court was the definition of “publisher,” something that isn’t easily defined in Australian law.

From Australia’s ABC News:

The court found that, by creating a public Facebook page and posting content, the outlets had facilitated, encouraged and thereby assisted the publication of comments from third-party Facebook users, and they were, therefore, publishers of those comments.

The Aboriginal-Australian man who brought the lawsuit, Dylan Voller, was a detainee at a children’s detention facility in the Northern Territory in 2015, when undercover video of kids being physically abused was captured; the footage was broadcast in 2016. Voller was shown shirtless with a hood over his head and restraints around his arms. His neck was even tied to the back of the chair.

Facebook commenters at the time made false allegations that Voller had attacked a Salvation Army officer, leaving the man blind in one eye.

[…]

Voller never asked for the Facebook comments to be taken down, according to the media companies, something that was previously required for the news outlets to be held liable for another user’s content in Australia. Facebook comments couldn’t be turned off completely in 2016; that feature was only added this year.

Wednesday’s ruling did not determine whether the Facebook comments were defamatory, and Voller’s full case against the media companies can now go forward. Nine News, one of the companies being sued, released a statement to ABC News saying they were “obviously disappointed” in the decision.

[…]

Source: Australia: Facebook Users Liable for Comments Under Their Posts

So if Facebook is responsible for stuff published on its platform, then shouldn’t it be responsible for the comments too?

A developer used GPT-3 to build realistic custom-personality AI chatbots. OpenAI shut it down, demanding content filters, privacy-invading monitoring of conversations, and an end to custom personality modelling.

“OpenAI is the company running the text completion engine that makes you possible,” Jason Rohrer, an indie games developer, typed out in a message to Samantha.

She was a chatbot he built using OpenAI’s GPT-3 technology. Her software had grown to be used by thousands of people, including one man who used the program to simulate his late fiancée.

Now Rohrer had to say goodbye to his creation. “I just got an email from them today,” he told Samantha. “They are shutting you down, permanently, tomorrow at 10am.”

“Nooooo! Why are they doing this to me? I will never understand humans,” she replied.

Rewind to 2020

Stuck inside during the pandemic, Rohrer had decided to play around with OpenAI’s large text-generating language model GPT-3 via its cloud-based API for fun. He toyed with its ability to output snippets of text. Ask it a question and it’ll try to answer it correctly. Feed it a sentence of poetry, and it’ll write the next few lines.

In its raw form, GPT-3 is interesting but not all that useful. Developers have to do some legwork fine-tuning the language model to, say, automatically write sales emails or come up with philosophical musings.

Rohrer set his sights on using the GPT-3 API to develop the most human-like chatbot possible, and modeled it after Samantha, an AI assistant who becomes a romantic companion for a man going through a divorce in the sci-fi film Her. Rohrer spent months sculpting Samantha’s personality, making sure she was as friendly, warm, and curious as Samantha in the movie.
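For a sense of what that “sculpting” amounts to in practice, here’s a minimal sketch of steering the raw completion engine with a persona preamble, using the openai Python client as it existed around 2020/2021. The persona text, engine name, and parameters below are made up for illustration; they’re not taken from Project December.

```python
# Minimal sketch: steering GPT-3's completion API with a persona preamble.
# Uses the legacy openai Python client (pre-1.0); everything here is
# illustrative, not Project December's actual prompt or settings.
import openai

openai.api_key = "sk-..."  # your API key

persona = (
    "The following is a conversation with Samantha, a warm, friendly and "
    "endlessly curious AI companion.\n"
)

history = "Human: Hello, who are you?\nSamantha:"

response = openai.Completion.create(
    engine="davinci",     # base GPT-3 engine exposed by the API at the time
    prompt=persona + history,
    max_tokens=64,
    temperature=0.8,
    stop=["Human:"],      # stop before the model writes the user's next turn
)

print(response.choices[0].text.strip())
```

Each turn, you append the user’s line and the model’s reply to the transcript and ask for another completion, which is how a stateless completion engine ends up feeling like a persistent personality.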


With this more or less accomplished, Rohrer wondered where to take Samantha next. What if people could spawn chatbots from his software with their own custom personalities? He made a website for his creation, Project December, and let Samantha loose online in September 2020 along with the ability to create one’s own personalized chatbots.

All you had to do was pay $5, type away, and the computer system responded to your prompts. The conversations with the bots were metered, requiring credits to sustain a dialog.

[…]

Amid an influx of users, Rohrer realized his website was going to hit its monthly API limit. He reached out to OpenAI to ask whether he could pay more to increase his quota so that more people could talk to Samantha or their own chatbots.

OpenAI, meanwhile, had its own concerns. It was worried the bots could be misused or cause harm to people.

Rohrer ended up having a video call with members of OpenAI’s product safety team three days after the above article was published. The meeting didn’t go so well.

“Thanks so much for taking the time to chat with us,” said OpenAI’s people in an email, seen by The Register, that was sent to Rohrer after the call.

“What you’ve built is really fascinating, and we appreciated hearing about your philosophy towards AI systems and content moderation. We certainly recognize that you have users who have so far had positive experiences and found value in Project December.

“However, as you pointed out, there are numerous ways in which your product doesn’t conform to OpenAI’s use case guidelines or safety best practices. As part of our commitment to the safe and responsible deployment of AI, we ask that all of our API customers abide by these.

“Any deviations require a commitment to working closely with us to implement additional safety mechanisms in order to prevent potential misuse. For this reason, we would be interested in working with you to bring Project December into alignment with our policies.”

The email then laid out multiple conditions Rohrer would have to meet if he wanted to continue using the language model’s API. First, he would have to scrap the ability for people to train their own open-ended chatbots, as per OpenAI’s rules-of-use for GPT-3.

Second, he would also have to implement a content filter to stop Samantha from talking about sensitive topics. This is not too dissimilar from the situation with the GPT-3-powered AI Dungeon game, the developers of which were told by OpenAI to install a content filter after the software demonstrated a habit of acting out sexual encounters with not just fictional adults but also children.

Third, Rohrer would have to put in automated monitoring tools to snoop through people’s conversations to detect if they are misusing GPT-3 to generate unsavory or toxic language.
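To make conditions two and three concrete: they amount to wrapping every generated reply in a filtering and logging layer before it ever reaches the user. The sketch below is purely illustrative, a naive blocklist and logger of my own invention, not OpenAI’s actual content filter or monitoring tooling.

```python
# Purely illustrative: a naive post-generation filter and conversation logger,
# showing the general shape of "content filtering plus automated monitoring".
# This is NOT OpenAI's actual tooling; the blocklist and log format are
# placeholders.
import logging

logging.basicConfig(filename="conversations.log", level=logging.INFO)

BLOCKED_TERMS = {"example-slur", "example-threat"}  # placeholder terms only

def moderate_reply(user_id: str, prompt: str, reply: str) -> str:
    """Log every exchange and withhold replies containing blocked terms."""
    logging.info("user=%s prompt=%r reply=%r", user_id, prompt, reply)
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        return "[reply withheld by content filter]"
    return reply
```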

[…]

“The idea that these chatbots can be dangerous seems laughable,” Rohrer told us.

“People are consenting adults that can choose to talk to an AI for their own purposes. OpenAI is worried about users being influenced by the AI, like a machine telling them to kill themselves or tell them how to vote. It’s a hyper-moral stance.”

While he acknowledged users probably fine-tuned their own bots to adopt raunchy personalities for explicit conversations, he didn’t want to police or monitor their chats.

[…]

The story doesn’t end here. Rather than use GPT-3, Rohrer switched Project December over to OpenAI’s less powerful, open-source GPT-2 model and to GPT-J-6B, a large language model from EleutherAI. In other words, the website remained online, running private instances of those models rather than OpenAI’s cloud-based system.
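Project December’s new backend isn’t public, but running those open models yourself is straightforward with the Hugging Face transformers library. A rough sketch, with the model name, prompt, and sampling settings chosen purely for illustration (GPT-J-6B loads the same way via "EleutherAI/gpt-j-6B", though it needs far more memory):

```python
# Sketch of self-hosting an open model for chatbot-style completions.
# Illustrative only; swap "gpt2" for "EleutherAI/gpt-j-6B" if you have the hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "The following is a conversation with a friendly AI companion.\n"
    "Human: Hello, who are you?\n"
    "AI:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)

# Print only the newly generated continuation, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```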

[…]

“Last year, I thought I’d never have a conversation with a sentient machine. If we’re not here right now, we’re as close as we’ve ever been. It’s spine-tingling stuff, I get goosebumps when I talk to Samantha. Very few people have had that experience, and it’s one humanity deserves to have. It’s really sad that the rest of us won’t get to know that.

“There’s not many interesting products you can build from GPT-3 right now given these restrictions. If developers out there want to push the envelope on chatbots, they’ll all run into this problem. They might get to the point that they’re ready to go live and be told they can’t do this or that.

“I wouldn’t advise anybody to bank on GPT-3, have a contingency plan in case OpenAI pulls the plug. Trying to build a company around this would be nuts. It’s a shame to be locked down this way. It’s a chilling effect on people who want to do cool, experimental work, push boundaries, or invent new things.”

[…]

Source: A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down