Paltering: lying by using the truth

There are three types of lies: omission, where someone holds out on the facts; commission, where someone states facts that are untrue; and paltering, where someone uses true facts to mislead you. It’s not always easy to detect, but there are a few telltale signs.

A recent study, published in the Journal of Personality and Social Psychology, suggests the practice of paltering is pretty common, especially among business executives. Not only that, but the people who do it don’t seem to think they’re doing anything wrong—despite the fact that most people consider it just as unethical and untrustworthy as intentional lies of commission. It’s not just execs who do it, though. If you’ve ever tried to buy a used car from a slimy salesman, been in a salary negotiation with a tough-as-nails boss, or watched basically any presidential debate, you’ve definitely seen paltering in action.

Lifehacker

Boffins craft perfect ‘head generator’ to beat facial recognition

Researchers from the Max Planck Institute for Informatics have defeated facial recognition on big social media platforms – by removing faces from photos and replacing them with automatically-painted replicas.

As the team of six researchers explained in their arXiv paper this month, people who want to stay private often blur their photos, not knowing that this is “surprisingly ineffective against state-of-the-art person recognisers.”
[…]
The result, the boffins claimed, is that their model can provide a realistic-looking result, even when it’s faced with “challenging poses and scenarios” including different lighting conditions, such that the “fake” face “blends naturally into the context”.

In common with modern facial recognition systems, Sun’s software builds a point cloud of landmarks captured from someone’s face; its adversarial attack against recognition perturbed those points.

Pairs of points from the original landmarks (real) and the generated landmarks (fake) are fed into the “head generator and discriminator” software to create the inpainted face.
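The landmark pipeline described above can be sketched roughly as follows. This is an illustrative toy, not the researchers' actual code: the 68-point count, the noise bound `epsilon`, and the helper names are all assumptions, and the generator/discriminator stage is only stubbed out as a pairing step.

```python
import random

def extract_landmarks(num_points=68):
    # Stand-in for a real landmark detector: produce a point cloud
    # of (x, y) facial landmarks in normalized [0, 1] coordinates.
    random.seed(0)
    return [(random.random(), random.random()) for _ in range(num_points)]

def perturb_landmarks(landmarks, epsilon=0.02):
    # Adversarial-style perturbation: nudge each landmark by a small,
    # bounded offset so a recogniser's distance metric is disrupted
    # while the overall facial geometry stays plausible.
    return [
        (x + random.uniform(-epsilon, epsilon),
         y + random.uniform(-epsilon, epsilon))
        for x, y in landmarks
    ]

real = extract_landmarks()
fake = perturb_landmarks(real)

# Pair real and fake landmarks, as described above, for input to
# the head generator and discriminator (not implemented here).
pairs = list(zip(real, fake))
```

The key idea the sketch captures is that the perturbation is bounded: each fake point stays within `epsilon` of its real counterpart, so the inpainted face can still blend naturally into the photo.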

The Register

Facebook rolls out AI to detect suicidal posts before they’re reported

Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.
[…]
Unfortunately, after TechCrunch asked if there was a way for users to opt out of having their posts scanned, a Facebook spokesperson responded that users cannot opt out. They noted that the feature is designed to enhance user safety, and that support resources offered by Facebook can be quickly dismissed if a user doesn’t want to see them.

Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like “are you OK?” and “Do you need help?”
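As a rough illustration of what pattern-based flagging looks like: the keyword matching below is a crude stand-in for the trained classifier described above (Facebook's actual system is a learned model, and this phrase list is invented for the example).

```python
# Illustrative keyword-based flagger -- a crude stand-in for the
# trained classifier described above, not Facebook's actual system.
CONCERN_PHRASES = ["are you ok", "do you need help"]

def flag_for_review(comments):
    # Return comments whose text matches a known concern pattern,
    # so a human moderator can review the surrounding post.
    return [c for c in comments
            if any(p in c.lower() for p in CONCERN_PHRASES)]

comments = ["Great photo!", "Are you OK?", "Do you need help?"]
print(flag_for_review(comments))  # -> ['Are you OK?', 'Do you need help?']
```

The real system's advantage over a phrase list is that a trained model generalizes to wording it has never seen, which is why Facebook trained on manually reported posts rather than hand-writing rules.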

TechCrunch