AI cameras could cut down traffic deaths, but there may be flaws

Last updated: 12-10-2019


(TECH NEWS) Traffic accidents have plagued humanity since motor vehicles were created. Can AI help cut down on texting-and-driving incidents?
Brittany Vance, Staff Writer
What if we told you Australian officials believe they have found a way to reduce driving deaths by almost 30% in just two years? It’s a pretty appealing concept. After all, Australia alone faces an average of over 3 deaths a day due to driving accidents. And Australia’s average death rate clocks in at just half of what we face in the United States.
There’s just one problem with Australia’s proposed solution: it’s basically Big Brother.
Australia plans to use AI cameras to catch people texting and driving. Plenty of places have outlawed texting while driving, but the rule is very hard to enforce – it means catching someone in the act. With AI cameras, phone use behind the wheel can be monitored and fined.
Australia has already started rolling out some of these systems in New South Wales. Because this is a new initiative, first-time offenders will be let off with a warning. Subsequent offenses can add up quickly, though, with fines anywhere from $233 to $309 USD. After a six-month trial period, the program is projected to expand significantly.
But there are real concerns with this project.
Surprisingly, privacy isn’t one of these worries. Sure, “AI cameras built to monitor individuals” sounds like a plot point from 1984, but it’s not quite as dire as it seems. First, many places already have traffic cameras to catch things like red-light running. More important, these machines aren’t being trained to identify faces. Instead, the cameras’ machine learning will focus on markers of distracted driving, like hands off the wheel.
The bigger concern is what will come from placing the burden of proof on drivers. Because machine learning isn’t perfect, it will be paired with human reviewers who check the tagged photographs to eliminate false positives. The problem is, humans aren’t perfect either. Some false positives are bound to fall through the cracks.
Some worry that the imperfect system will slow down the judicial system as more people go to court over traffic violations they believe are unfair. Others are concerned that some indicators of texting while driving (such as hands off the wheel) might not point only to texting. What if, for instance, someone was passing a phone to the back seat? Changing the music? There are subtleties that may not be captured in a photograph or identified by an AI.
No matter what you think of the system, however, only time will tell if the project is effective.
Brittany is a Staff Writer for The American Genius with a Master's in Media Studies under her belt. When she's not writing or analyzing the educational potential of video games, she's probably baking.
DeepComposer: AWS’ piano keyboard turns AI up to 11
(TECH NEWS) Amazon has been busy with machine learning, which includes a camera, a car, and now DeepComposer that’s able to add to classics on the fly
Mary Ann Lopez, Staff Writer
Musicians, listen up: there’s a new kid in town. Its name is DeepComposer, and it’s coming to take your creativity and turn it up to 11.
Artificial intelligence has taken a leap into what has long been considered the “pinnacle of human creativity,” as Amazon revealed what is said to be the world’s first machine-learning-enabled keyboard capable of creating music.
Amazon unveiled its AWS DeepComposer keyboard Monday during AWS re:Invent, a learning conference Amazon Web Services hosted for the global cloud computing community in Las Vegas.
Demonstrating DeepComposer’s abilities, Dr. Matt Wood, Amazon’s VP of Artificial Intelligence, played a snippet of Beethoven’s “Ode to Joy” and then let the keyboard riff on it with drums, synthesizer, guitar, and bass, sharing a more rockin’ version of the masterpiece.
Generative AI is considered by scientists at MIT to be one of the most promising advances in AI in the past decade, Wood told the crowd. Generative AI allows a machine not only to learn from examples, as a human would, but to take it to the next level and connect the dots, making the creative leap to composing something completely new.
“It [Generative AI] opens the door to an entire world of possibilities for human and computer creativity, with practical applications emerging across industries, from turning sketches into images for accelerated product development, to improving computer-aided design of complex objects,” Amazon said on its AWS re:Invent website.
How does it work? The generative AI technique pits two different neural networks against each other to produce new, original digital works based on sample inputs, according to Amazon. The generator creates, the discriminator provides feedback for tweaks, and together they produce “exquisite music,” Wood explained.
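The back-and-forth Wood describes is the classic generative adversarial network (GAN) setup. Here is a minimal, illustrative sketch of that adversarial loop on toy one-dimensional data – this is not Amazon’s code, and the generator, discriminator, and learning rates are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy generator: turns noise z into a sample near a learnable center w.
def generate(z, w):
    return w + 0.1 * z

# Toy discriminator: probability that sample x is "real" (one weight v).
def discriminate(x, v):
    return sigmoid(v * x)

w, v = 0.0, 0.0                       # generator / discriminator weights
real = rng.normal(2.0, 0.1, size=64)  # "real" samples the generator imitates

for _ in range(200):
    z = rng.normal(size=64)
    fake = generate(z, w)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    grad_v = (np.mean((1 - discriminate(real, v)) * real)
              - np.mean(discriminate(fake, v) * fake))
    v += 0.1 * grad_v

    # Generator step: adjust w so the discriminator scores fakes as real.
    grad_w = np.mean((1 - discriminate(fake, v)) * v)
    w += 0.1 * grad_w

# The adversarial loop pulls w toward the real-data mean (2.0).
```

DeepComposer applies the same tug-of-war at much larger scale, with the “samples” being note sequences instead of numbers.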
A user inputs a melody on the keyboard, then, using the console, chooses a genre – rock, classical, pop, jazz, or a custom one – and voilà: a new piece of music. If they so desire, users can then share their creations with the world through SoundCloud.
This is the third machine learning teaching device Amazon has made available, according to TechCrunch. It introduced the DeepLens camera in 2017 and the DeepRacer racing cars in 2018. DeepComposer isn’t available just yet, but AWS account holders can sign up for a preview.
AI: Attempts to understand and regulate it fail because we’re human
(TECH NEWS) NYC created an AI task force to attempt to understand and regulate growing AI, but it failed because we are humans with odd motives and slower thinking
Brittany Vance, Staff Writer
Artificial intelligence (AI) is racist. Okay, fine, not all AI are racist (#NotAllAI), but they do have a tendency toward biases. Just look at Microsoft’s chatbot, Tay, which managed to start spewing hateful commentary in under 24 hours. Now, a chatbot isn’t necessarily a big deal, but for better or worse, AI is being used in increasingly crucial ways.
A biased AI could, for instance, change how you are policed, whether or not you get a job, or your medical treatment.
In an attempt to understand and regulate these systems, New York City created a task force called the Automated Decision Systems (ADS) Task Force. There’s just one problem: this group has been a total disaster.
When it was formed in May of 2018, the outlook was hopeful. ADS comprised city council members and industry professionals tasked with inspecting the city’s use of AI and, hopefully, arriving at meaningful calls to action. Unfortunately, the task force has been plagued with troubles from the beginning.
For example, although ADS was created to examine the automated systems New York City is using, they weren’t granted access. Albert Cahn, one of the members of the task force, said that although the administration had all the data they needed, ADS was not given any information.
Sounds frustrating, right? One potential reason for this massive hiccup is that the administration was relying on third-party systems sold by companies looking to make a profit. In that case, it makes sense that a company would want to avoid scrutiny: it could easily lead to greater regulation or, at the very least, a broader understanding of how their systems really work.
Another overarching problem was the struggle to define artificial intelligence at all. Some automated systems rely on complex machine learning, but others are far simpler. What counts as an automated system worth examining? The jury is still out.
In the big scheme of things, AI tech is still in its infancy. We’re just starting to grasp what it’s currently capable of, much less what it could accomplish. To add to the complications, technology is evolving at breakneck speed. What we want to regulate now might be entirely obsolete in ten years.
Then again, it might not. Machines might be fast to change, but their human creators? Less so. Left unchecked, it’s debatable whether creators will work to remove biases.
NYC’s task force might have failed – its concluding write-up was far from ideal – but the creation of this group signals a growing demand for a closer look at the technology making life-changing decisions. The question now: who is best suited for this task?
Brittany Vance, Staff Writer
If you haven’t heard of TikTok by now, you’re probably living in a remote corner of the Alaskan wilds, far out of the range of internet access. (That, or you just don’t know any teenagers.) TikTok is a video-sharing app where users can record videos that are 15 seconds or less.
It’s also super popular. TikTok was the third most downloaded non-game app this year, with over 1.5 billion downloads since it was launched two years ago.
There’s just one problem: the app is owned by a Chinese company, Bytedance, and United States officials are growing concerned about how much data the foreign company is able to collect. According to NBC, the Committee on Foreign Investment in the United States reached out to Bytedance in early November over fears about how the company was storing users’ data. TikTok has also been accused of censoring political content.
In response, Bytedance has moved to ringfence TikTok, essentially separating the app from the rest of its business operations. TikTok’s product and development, marketing, and legal teams have been completely severed from the overall operation.
Bytedance is also working to grow its teams in the United States. Since 2017, U.S. TikTok employees have jumped from 20 to 400. That number might grow even faster with a potential team in California to help cover data management.
And speaking of data, TikTok insists that all U.S. data is stored on servers in the United States, with a backup server in Singapore. This information includes users’ age, location, email address and uploaded video content. Some data is input by users themselves, but other aspects, like location, are recorded automatically.
Although TikTok is making changes, there are still many questions about the operation as a whole. Both the United States and China have opened security investigations over the data Bytedance collects and how it is handled.
The United States’ probe has also focused on TikTok’s censorship, which has included removing or marking private videos that mention topics like Tiananmen Square. Facebook CEO Mark Zuckerberg has also spoken out against TikTok’s censorship.
All of this reveals some of the complexities of running an international business, especially in light of President Trump’s trade wars with China. TikTok is left to balance often opposing desires from different governments.
On a personal scale, though, this also serves as a reminder to be aware of who has access to your data and what that might mean.

