Tiny Pacemakers Could Be Game Changers for Heart Patients

Tiny, new pacemakers are making headway around the world. One type, the Micra, is keeping 15,000 people’s hearts beating in 40 countries, according to manufacturer Medtronic. One of those people is Mary Lou Trejo, a senior citizen who lives in Ohio. 

A healthy heart has its own pacemaker that establishes its rhythm, but people like Trejo need the help of an artificial device.

Trejo comes from a family with a history of heart disease. Her heart skipped beats, and she could feel it going out of rhythm. Trejo wanted to do something to advance heart health, so in 2014, she volunteered to participate in a clinical trial for the Micra pacemaker. The device is 24 millimeters long, about one-tenth the size of a traditional pacemaker.

Traditional pacemakers

Most pacemakers rely on batteries placed under the skin, usually just below the collarbone. Sometimes patients get infections after the surgery or have difficulty healing from the incision.

Traditional pacemakers use leads with electrodes on one end that are threaded through blood vessels to connect to the heart. There can be problems with the leads as well.

Dr. Ralph Augostini at Ohio State’s Wexner Medical Center says a tiny pacemaker like the Micra avoids all of these problems. 

“The electrodes are part of the can, and therefore it eliminates the lead,” he said. There’s no incision in the chest to become infected and no chance of complications with the leads.

Small and self-contained

Augostini implanted Trejo’s pacemaker in 2014. He threaded the entire device through a vein in her leg up to her heart. The pacemaker has small, flexible tines that anchor it into the folds of the heart muscle. Once it’s in place, the doctor gives it a tug to make sure the pacemaker is stable before removing the catheter used to place it in the heart.

The Wexner Medical Center was one of the sites that participated in the Micra clinical trial. Since the Micra received FDA approval in 2016, Medtronic has been training more physicians on the procedure. A company spokesman told VOA that this device is becoming available at other centers across the U.S. and countries throughout the world.

Dr. John Hummell, a cardiologist at the Wexner Medical Center, has studied the effectiveness of this new generation of pacemakers. 

“We don’t leave any wires behind and the pacemaker, the battery, the wire is all just a tiny little piece of metal sitting down in the heart,” he said. Medtronic said the results of the clinical trial showed a success rate of 99.6 percent.

Dr. Richard Weachter, with the University of Missouri Health Care, says the complication rate for leadless pacemakers is about half that of traditional pacemakers.

The battery lasts for 14 years and after that, Weachter said, doctors can implant another one in the same chamber of the heart. They can repeat the procedure a third time if needed.

The pacemaker activates only when necessary to keep the heart beating normally. Studies show that the Micra and other leadless pacemakers are safe and effective.

These tiny pacemakers are not right for all patients, but as the technology develops, more people will be able to benefit from the procedure. Four years after her implant, Trejo’s doctors say she is doing fine.


Big Rigs Almost Driving Themselves on the Highway

Four automakers in Japan, including Mitsubishi and Isuzu, have road-tested a form of driverless technology. The big rigs are all equipped with a type of adaptive cruise-control system as a step toward removing the one feature you’d expect to see in the cab: a driver. Arash Arabasadi reports.

Robot Drives Itself to Deliver Packages

Delivery robots could one day be part of the landscape of cities around the world. Among the latest to be developed is an Italian-made model that drives itself around town to drop off packages. Since the machine runs on electricity, its developers say it is an environmentally friendly alternative to fuel-powered delivery vehicles that cause pollution. VOA’s Deborah Block has more.

Facebook Forges Ahead With Kids App Despite Expert Criticism

Facebook is forging ahead with its messaging app for kids, despite child experts who have pressed the company to shut it down and others who question Facebook’s financial support of some advisers who approved of the app.

Messenger Kids lets kids under 13 chat with friends and family. It displays no ads and lets parents approve who their children message. But critics say it serves to lure kids into harmful social media use and to hook young people on Facebook as it tries to compete with Snapchat or its own Instagram app. They say kids shouldn’t be on such apps at all — although they often are.

“It is disturbing that Facebook, in the face of widespread concern, is aggressively marketing Messenger Kids to even more children,” the Campaign For a Commercial-Free Childhood said in a statement this week.

Lukewarm reception

Messenger Kids launched on iOS to lukewarm reception in December. It arrived on Amazon devices in January and on Android Wednesday. Throughout, Facebook has touted a team of advisers, academics and families who helped shape the app in the year before it launched.

But a Wired report this week pointed out that more than half of this safety advisory board had financial ties to the company. Facebook confirmed this and said it hasn’t hidden donations to these individuals and groups — although it hasn’t publicized them, either.

Facebook’s donations to groups like the National PTA (the official name for the Parent Teacher Association) typically covered logistics costs or sponsored activities like anti-bullying programs or events such as parent roundtables. One advisory group, the Family Online Safety Institute, has a Facebook executive on its board, along with execs from Disney, Comcast and Google.

“We sometimes provide funding to cover programmatic or logistics expenses, to make sure our work together can have the most impact,” Facebook said in a statement, adding that many of the organizations and people who advised on Messenger Kids do not receive financial support of any kind.

Common Sense a late addition

But for a company under pressure from many sides — Congress, regulators, advocates for online privacy and mental health — even the appearance of impropriety can hurt. Facebook didn’t invite prominent critics, such as the nonprofit Common Sense Media, to advise it on Messenger Kids until the process was nearly over. Facebook would not comment publicly on why it didn’t include Common Sense earlier in the process. 

“Because they know we opposed their position,” said James Steyer, the CEO of Common Sense. The group’s stance is that Facebook never should have released a product aimed at kids. “They know very well our position with Messenger Kids.”

A few weeks after Messenger Kids launched, nearly 100 outside experts banded together to urge Facebook to shut down the app, which it has not done. The company says it is “committed to building better products for families, including Messenger Kids. That means listening to parents and experts, including our critics.”

Wired article unfair?

One of Facebook’s experts contested the notion that company advisers were in Facebook’s pocket. Lewis Bernstein, now a paid Facebook consultant who worked for Sesame Workshop (the nonprofit behind “Sesame Street”) in various capacities over three decades, said the Wired article “unfairly” accused him and his colleagues of accepting travel expenses to Facebook seminars.

But the Wired story did not count Bernstein as one of the seven out of 13 advisers who took funding for Messenger Kids, and the magazine did not include travel funding when it counted financial ties. Bernstein was not a Facebook consultant at the time he advised the company on Messenger Kids.

Bernstein, who doesn’t see technology as “inherently dangerous,” suggested that Facebook critics like Common Sense are also tainted by accepting $50 million in donated air time for a campaign warning about the dangers of technology addiction. Among those air-time donors are Comcast and AT&T’s DirecTV.

But Common Sense spokeswoman Corbie Kiernan called that figure a “misrepresentation” that got picked up by news outlets. She said Common Sense has public service announcement commitments “from partners such as Comcast and DirecTV” that have been valued at $50 million. The group has used that time in other campaigns in addition to its current “Truth About Tech” effort, which it’s launching with a group of ex-Google and Facebook employees and their newly formed Center for Humane Technology.

Could Mining, Analyzing Social Media Posts Prevent Future Massacres?

In multiple online comments and posts, Nikolas Cruz, 19, the suspect in the Valentine’s Day high school shooting in Florida, apparently signaled his intent to hurt other people.

I want to “shoot people with my AR-15,” a person using the name Nikolas Cruz wrote in one place. “I wanna die Fighting killing…ton of people.”

As investigators try to piece together what led to the school shooting that left 17 people dead and many others wounded, they are closely examining the suspect’s social media activity, as well as other information about him.

The focus on Cruz’s digital footprint highlights a question that law enforcement, social scientists and society at large have been grappling with: If anyone had been paying attention to his postings, could these deaths have been prevented?

The FBI was contacted about a social media post in which a commenter said he wanted to be a “professional school shooter.”

However, though the commenter’s username was “Nikolas Cruz” — the same name as the shooting suspect — the FBI couldn’t identify the poster, according to the Associated Press.

But what if an algorithm could have sifted through all of Cruz’s posts and comments to bring him to the attention of authorities?

Data mining

In an era where data can be dissected and analyzed to predict where cold medicine will most likely be needed next week or which shoes will be most popular on Amazon tomorrow, some people wonder why there isn’t more use of artificial intelligence to sift through social media in an effort to prevent crime.

“We need all the tools we can get to prevent tragedies like this,” said Sean Young, executive director of the University of California Institute for Prediction Technology.

“The science exists on how to use social media to find and help people in psychological need,” he said. “I believe the benefits outweigh the risks, so I think it’s really important to use social media as a prevention tool.”

Despite the vision popularized by the 2002 movie Minority Report, in which police apprehend murderers before they act based on knowledge provided by psychics known as “precogs,” the idea of police successfully analyzing data to find a person preparing to harm others remains a far-off scenario, according to experts.

Predictive policing

Increasingly, police departments are turning to “predictive policing,” which involves taking large data sets and using algorithms to forecast potential crimes, then deploying officers to the areas flagged. One potential treasure trove of data is social media, which is often public and can indicate what people are discussing in real time and by location.
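In its crudest form, that kind of forecasting amounts to counting where incidents have clustered recently. The short sketch below is a hypothetical illustration only, not any police department’s actual system; the incident records, grid size and scoring window are all invented.

```python
# A minimal, hypothetical sketch of the idea behind "predictive policing":
# count historical incidents per map grid cell and rank the cells with the
# most recent activity. Real systems are far more sophisticated; the data,
# grid size and time window here are invented for illustration only.
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical incident records: (latitude, longitude, timestamp)
incidents = [
    (34.05, -118.25, datetime(2018, 2, 1)),
    (34.06, -118.24, datetime(2018, 2, 3)),
    (34.05, -118.25, datetime(2018, 2, 10)),
    (33.99, -118.40, datetime(2018, 1, 5)),
]

GRID = 0.01  # degrees; roughly a neighborhood-scale cell

def cell(lat, lon):
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat / GRID) * GRID, round(lon / GRID) * GRID)

def forecast(records, now, window_days=30, top_n=3):
    """Rank grid cells by incident count within the recent window."""
    cutoff = now - timedelta(days=window_days)
    counts = Counter(cell(lat, lon) for lat, lon, ts in records if ts >= cutoff)
    return counts.most_common(top_n)

# The "hot spots" for the past 30 days, ranked by incident count.
print(forecast(incidents, now=datetime(2018, 2, 15)))
```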

Predictive policing, however, comes with ethical questions over whether data sets and algorithms have built-in biases, particularly toward minorities.

A study in Los Angeles aims to see if social media postings can help police figure out where to put resources to stop hate crimes. 

“With enough funds and unfettered data access and linkage, I can see how a system could be built where machine learning could identify patterns in text [threats, emotional states] and images [weapons] that would indicate an increased risk,” said Matthew Williams, director of the social data science lab and data innovation research institute at Cardiff University in Wales. He is one of the Los Angeles study researchers.

“But the ethics would preclude such a system, unless those being observed consented, but then the system could be subverted.”

Arjun Sethi, a Georgetown law professor, says it is impossible to divorce predictive policing from entrenched prejudice in the criminal justice system. “We found big data is used in racially discriminating ways,” he said.

Using Facebook posts

Still, the potential exists that, with the right program, it may be possible to separate someone signaling for help from all the noise on social media.

A new program at Facebook seeks to harness the field of machine learning to get help to people contemplating suicide. Among millions of posts each day, Facebook can find posts of those who may be suicidal or at risk of self-harm — even if no one in the person’s Facebook social circle reported the person’s posts to the company. In machine learning, computers learn to recognize patterns in data without being explicitly programmed to do so.
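As a rough illustration of the general technique, a text classifier can be trained on labeled examples and then used to flag new posts for human review. The sketch below is hypothetical and is not Facebook’s system; the tiny training set, labels and threshold are invented purely to show the shape of such a pipeline, and any real deployment would require far more data and careful expert oversight.

```python
# A toy illustration of the general technique the article describes: training
# a text classifier to flag posts that may signal self-harm risk. This is NOT
# Facebook's system; the training examples, labels and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples: 1 = concerning, 0 = benign.
posts = [
    "I can't take this anymore, nobody would miss me",
    "I don't want to be here tomorrow",
    "great game last night, see everyone at practice",
    "selling my old bike, message me if interested",
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(posts)

model = LogisticRegression()
model.fit(features, labels)

def flag_for_review(post, threshold=0.5):
    """Return True if the model's estimated risk exceeds the threshold."""
    prob = model.predict_proba(vectorizer.transform([post]))[0][1]
    return prob >= threshold

# A flagged post would go to human reviewers, not trigger automatic action.
print(flag_for_review("nobody would even notice if I was gone"))
```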

The Facebook system relies on text, but Mark Zuckerberg, the company’s chief executive, has said the firm may expand it to photos and videos. Posts the system flags are brought to the attention of a Facebook team for review.

Being able to figure out whether someone is going to harm himself, herself or others is difficult and raises ethical dilemmas. But, says Young of UCLA, a person’s troubling social media posts can be red flags that should be checked out.

Belgian Court Orders Facebook to Stop Collecting Data

Belgian media say a Brussels court has ordered Facebook to stop collecting data about citizens in the country or face fines for every day it fails to comply.

The daily De Standaard reported Friday that the court upheld a Belgian privacy commission finding that Facebook is collecting data without users’ consent.

It said the court concluded that Facebook does not adequately inform users that it is collecting information, what kind of details it keeps and for how long, or what it does with the data.

It ruled that Facebook must stop tracking and recording the internet use of Belgians and destroy any data it has obtained illegally, or face fines of 250,000 euros ($311,500) for every day it delays.

When Will Robots Work Alongside Humans?

Most analysts and economists agree that robots are slowly replacing humans in many jobs. They weld and paint car bodies, sort merchandise in warehouses, explore underground pipes and inspect suspicious packages. Yet we still do not see robots as domestic help, except for robotic vacuum cleaners. Robotics experts say there is another barrier robots need to cross before they can work alongside humans. VOA’s George Putic reports.

White House Blames Russia for ‘NotPetya’ Cyber Attack

The White House on Thursday blamed Russia for the devastating “NotPetya” cyber attack last year, joining the British government in condemning Moscow for unleashing a virus that crippled parts of Ukraine’s infrastructure and damaged computers in countries across the globe.

The attack in June of 2017 “spread worldwide, causing billions of dollars in damage across Europe, Asia and the Americas,” White House Press Secretary Sarah Sanders said in a statement.

“It was part of the Kremlin’s ongoing effort to destabilize Ukraine and demonstrates ever more clearly Russia’s involvement in the ongoing conflict,” Sanders added. “This was also a reckless and indiscriminate cyber attack that will be met with international consequences.”

The U.S. government is “reviewing a range of options,” a senior White House official said when asked about the consequences for Russia’s actions.

Earlier on Thursday, Russia denied an accusation by the British government that it was behind the attack, saying it was part of a “Russophobic” campaign that it said was being waged by some Western countries.

The so-called NotPetya attack in June started in Ukraine, where it crippled government and business computers before spreading around Europe and the world, halting operations at ports, factories and offices.

Britain’s foreign ministry said in a statement released earlier in the day that the attack originated from the Russian military.

“The decision to publicly attribute this incident underlines the fact that the UK and its allies will not tolerate malicious cyber activity,” the ministry said in a statement.

“The attack masqueraded as a criminal enterprise but its purpose was principally to disrupt,” it said.

“Primary targets were Ukrainian financial, energy and government sectors. Its indiscriminate design caused it to spread further, affecting other European and Russian business.”