World’s first fully electric plane successfully flies for nearly 15 minutes

In a world first, a fully electric commercial plane has taken its inaugural test flight.

In Vancouver, Canada, the famously global-warming-fuelling aviation industry has just witnessed a breakthrough: the debut flight of an all-electric plane. The engine was created by Seattle firm Magnix, and the plane was outfitted and operated in conjunction with Harbour Air, a Canadian airline that mostly ferries tourists to Canada’s winter slopes and island resorts.

Though the flight lasted a humble fifteen minutes, this was still an epoch-opening moment as a successful proof of concept: it is possible to get into the air without producing damaging carbon emissions.

The aircraft itself, an old favorite of civil aviation, is anything but a breakthrough in advanced technology. In fact, it’s over 50 years old: a 1962 de Havilland Beaver seaplane, to be precise, which has been retrofitted with Magnix’s inventive electric engine.

It’s a small plane, with capacity for just six passengers, so it doesn’t require the kind of engine power of a commercial jet. Even so, the electric engine was more than enough for the job, with the pilot, Greg McDougall (who also doubles as founder and chief executive of Harbour Air), claiming he actually had to ease off the power compared with his regular fuel-burning engine.

McDougall believes this could be a real boost for civil aviation, and a money saver too, since electric engines are cheaper to run than conventional fuel-burning ones and require much less maintenance, saving on the cost of regular repairs.

The limitations of the electric engine’s battery mean that long-haul flights are going to be out of the question for the foreseeable future; the lithium battery gives the e-plane a range of about 100 miles (160 kilometers). But that is a perfect range for the kind of short-haul, internal flights in which Harbour Air specializes.

Civil aviation, even in the global warming era, is a growth area, with the number of flights nearly doubling from 1998 to 2017 and passenger numbers rising to almost 4 billion a year. And short-haul flights are among the most carbon-inefficient ways to travel, far more so than long-haul flights, and particularly than long-haul journeys broken into connecting legs.

Even the Canadian government is looking on approvingly, with transport minister Marc Garneau hopeful, if not quite optimistic, that the e-engine could transform the transport infrastructure of Canada just in time to meet decarbonization targets.

And we’re going to need this technology to take off if we’re going to reduce the 895 million tonnes of carbon dioxide produced every year by global aviation. That is around 2% of the world’s emissions and 12% of all transport emissions, but the impact is heightened by the altitude at which those gases are released, with aircraft NOx emissions also contributing to warming (albeit only in the short term, and close to where they are emitted).
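As a rough sanity check, those figures hang together: dividing aviation’s annual output by the quoted percentages gives implied totals for world and transport emissions. The sketch below uses only the numbers in this article, so the results are approximations, not official statistics.

```python
# Back-of-the-envelope check using only the figures quoted in the article.
aviation_co2_gt = 0.895           # 895 million tonnes of CO2 a year, in gigatonnes
share_of_world = 0.02             # "2% of the world's emissions"
share_of_transport = 0.12         # "12% of all transport emissions"

implied_world_total = aviation_co2_gt / share_of_world          # roughly 45 Gt a year
implied_transport_total = aviation_co2_gt / share_of_transport  # roughly 7.5 Gt a year

print(implied_world_total, implied_transport_total)
```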

Not only that, but recent research suggests that planes’ contrails (long a source of controversy amongst conspiracy theorists) are, in fact, even more damaging than their carbon emissions since they trap heat in the Earth’s atmosphere.

But there’s still a way to go yet before the electric engine overtakes conventional fuel. For one thing, more tests will need to be done to prove that Magnix’s engine is reliable and safe over time.

Even once this is done to its creator’s satisfaction, the engine will then need approval from aviation regulators before it can enter use on a commercial rather than an experimental scale. All in all, it could be at least another two years before Greg McDougall can fulfill his aim of outfitting all 40 seaplanes in his small fleet with e-engines.

In this, he will hopefully see his planes joining the likes of shipping, trains (already electrified across most of Europe and many East Asian countries), and increasingly cars in taking to the carbon-neutral electric age. And with global aviation only set to grow, and carbon cuts demanded by 2050, the sooner we start that journey, the better.

Facebook is back in the operating system game

Eager to break the shackles of Android, Facebook is aiming to develop an operating system of its own.

A new report from The Information details how Facebook’s frustration at being at the mercy of rival tech giants for its operating systems has spurred its latest quest: developing an exclusive OS to run its own devices.

Although light on detail about how this OS might work, or what its design might be, the report mentions the Oculus Rift and the Portal, both of which Facebook manufactures and both of which run on their own operating systems (in each case a version of Android with additional modifications).

For a tech giant (Facebook) that is currently at the mercy of another tech giant (Google) for the underlying software (Android) that runs all its proprietary hardware, it’s easy to see the existential threat that may lie in the future. Just as Facebook ate its social media competition alive, Google swallowed YouTube, and Microsoft snapped up Skype, it’s clear that the growth models of all these companies depend on taking up more and more of the digital and tech market and devouring whole any other company that stands in their way.

Facebook’s dependence on Android, therefore, leaves it with a soft underbelly exposed to any predatory and anti-competitive inclinations that Google might be entertaining. But the company looks to have plans far bigger than avoiding being eaten by bigger fish. It wants to be the biggest fish (possibly even beyond the realms of tech, digital, and even conventional commerce…).

Facebook even seems to have its eye on the voice assistant market, having confirmed last year that it is working on its own equivalent to Siri and Alexa, proving that its ambitions are not limited to social media, nor to avoiding being swallowed, but that it is deliberately gunning for the biggest badasses in the digital field.

In all likelihood, Facebook is looking at Apple with envious eyes, seeing not only hardware running on Apple’s own OS but even the chip technology underlying that hardware owned in-house. To that end, The Information suggests that Facebook is looking into custom chip hardware, something hinted at by both Bloomberg and the Financial Times in 2019 and now looking close to confirmed as fact. The head of development on the project, Shahriar Rabii, was poached from Google, while one of the OS designers, Mark Lucovsky, was poached from Microsoft, where he had a key role in Windows NT.

For all its ambition, Facebook would do well to remember previous failures born of vaulting ambition. Those with long memories may recall an earlier stab at a Facebook OS in 2013 (actually called Home): a disastrous attempt to hijack and rewrite Android for a phone produced by HTC (the HTC First), resulting in an undesirable handset flooded with unwanted Facebook feeds, a price cut by AT&T to just 99 cents one month after going on the market, and a whole lot of pissed-off customers.

In addition to having existing hardware to run the new OS on, Facebook has previously announced plans for new smart glasses, in a bid to enter a developing market before it comes to ‘replace the smartphone,’ reaching 50% of all smartphone users within the next five years, as Ericsson has (most probably hyperbolically) predicted. Facebook’s Orion glasses are slated to arrive in 2023 (the same year that Apple’s equivalent will be dropping, curiously enough…), and the new generation of AR glasses is a creepy step beyond the current crop, with plans for them to be operated by what amounts to mind control.

Given the HTC First, we’re not sure we’d want Facebook’s OS to get direct access to our minds; the idea of being greeted by the Facebook home screen inside our own heads is more than a little unsettling. But that’s up to future customers. That is, if, given Facebook’s history of swallowing its competition, they’re even given the choice…

Authorities begin to use facial recognition built by scraping billions of pictures

If you have images of your face online, you may need to start worrying.

As revealed by New York Times reporter Kashmir Hill, hundreds of police departments and law enforcement agencies across the US have begun signing up to access the services of Clearview AI.

Clearview is a company that provides specialized facial recognition services, based on a database of three billion images scraped from across the internet, including from social media sites such as Facebook and YouTube, in what could well be a breach of those sites’ terms of service (Twitter, for one, explicitly bans the use of images from its platform for facial recognition).

From this database, Clearview is able to run astonishingly efficient searches on any given image of an individual’s face, finding matching images (and their source pages) from across the internet. Those sources may include social media accounts, where the person may have given away far more information about themselves than just their appearance: job, address, hobbies, hours of work, who their friends and family are, and so on.
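Clearview hasn’t published how its matching works, but a reverse face search of this general kind typically boils down to turning each face into a numeric ‘embedding’ and then hunting for the nearest stored vectors. The sketch below is purely illustrative and is not Clearview’s system: embed_face stands in for any hypothetical face-embedding model, and the ‘database’ is just a dictionary keyed by source URL.

```python
# Illustrative sketch of a reverse face search; not Clearview's code.
import numpy as np

def build_index(labelled_images, embed_face):
    """labelled_images: iterable of (source_url, image).
    embed_face: any function returning a unit-length embedding vector for the face
    in an image (a hypothetical placeholder for a real face-embedding model)."""
    return {url: embed_face(img) for url, img in labelled_images}

def search(index, query_image, embed_face, top_k=5):
    """Return the top_k source URLs whose stored faces best match the query image.
    Cosine similarity reduces to a dot product because embeddings are unit-length."""
    q = embed_face(query_image)
    scores = {url: float(np.dot(vec, q)) for url, vec in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Each returned URL is a page where that face appears, which is exactly what makes a three-billion-image index so much more revealing than the search itself.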

Now, this may sound pretty terrifying, but there are good arguments for why it may be an incredible tool for law enforcement. The Times’ report showed that the use of Clearview has already led to arrests of suspects in shoplifting, identity theft, credit card fraud, murder, and child sexual exploitation cases, in one instance leading to an arrest within twenty minutes. This technology could revolutionize detective work and policing. But it could also transform our relationship with the state – and not in a good way.

And that is precisely why companies and governments have fought shy of using such technology. Google has suspended research in the area and is encouraging a temporary ban, while San Francisco’s police department is specifically forbidden from using facial recognition software. Even in the last few days, the EU has mooted the possibility of a five-year moratorium on facial recognition, in spite (or perhaps because) of the inexorable advance of ever more sophisticated recognition technology. Its use by police in the UK has already led to debates about its legality, and about how fit for purpose other hi-tech tools of detection are when it comes to protecting public freedoms. Seventy years after George Orwell’s death, now might not be the most appropriate time to usher in an all-seeing Big Brother state (or would it…?).

Law enforcement agencies have already been compiling their own databases of images for use with their own facial recognition software. They have been doing so for the best part of two decades, but senior politicians in the UK have challenged even this. There are also numerous practical issues to consider, especially around the risk of false-positive identifications and the many studies that show most software’s ‘rampant’ racial bias when it comes to misidentification.

The New York Times has been doing sterling work busting open the almost accidental private-sector surveillance state that digital companies have been building up around us, sharing its findings through its Privacy Project. Still, Clearview is a new and particularly worrying case.

The company seems to be a rogue operator, with shaky publicly available company information and little grasp of the legality or otherwise of its own information-gathering techniques (and a willingness to access the images uploaded by law enforcement agencies, against its promises not to, something reporter Kashmir Hill discovered after she asked cooperating police officers to run her own image through the system).

Combine this loose sense of legal propriety with many police departments’ limited grasp of the Clearview technology they are using (and the fact that no independent authority, such as the National Institute of Standards and Technology, has yet vetted this software for public use), and you get even deeper into risky territory, where horrible miscarriages of justice and invasions of privacy might occur.

There are also worries around escalation. Can facial recognition software be kept only to law enforcement? If not, the ability to quickly and easily get personal data on someone you have a photo of could become a boon for fraudsters, blackmailers, and identity thieves, not to mention stalkers or similar predators.

But it seems there is little politicians can do, or are willing to do, to stop the relentless march of AI. If software can do it, some rogue entrepreneur will do it, seems to be the inevitable logic. And so we all march on into dystopia, one misjudged app at a time.

Leaked data proves just how much our phones are spying on us

Think your phone isn’t tracking you and sharing your every digital move? Think again.

Think your phone isn’t tracking you and sharing your every digital move? Think again. It definitely is. That’s according to research shared by the New York Times based on their investigation of leaked data.

The investigation was coordinated by the New York Times Privacy Project and used a leak from a location data company, one of many little-known businesses in an under-reported industry dedicated to using electronic data to track every single one of us everywhere we go.

This data included over 50 billion location points covering a period of a few months in 2016 and 2017, gathered from everything from weather and local news apps to coupon-saving sites. Each one of these points represented data from one of 12 million Americans’ phones. And this wasn’t mere harmless metadata, of interest only to algorithms and advertisers. No, this was data so specific that it allowed the investigating team from the New York Times to identify and then track individuals, including celebrities, government officials, other investigative journalists, and one poor engineer who worked for a competitor.

Both collecting and selling this data is currently perfectly legal in the US, as part of the lucrative ‘location data’ industry. The recipients of the data, the companies claim, are heavily vetted, although in whose hands that vetting process rests is worth questioning. The companies also claim that the data is secure (which this leak would seem to dispute…), that it is anonymous (although investigation can easily lead to jigsaw identification), and that it is collected with users’ consent (but do users know what they’re consenting to?).

There are also no guarantees that individual data analysts working for these companies are prevented from abusing their access to all this valuable data, for tracking an ex-partner, for instance. As with the vetting of the partner companies the data is shared with, none of this is subject to the kind of scrutiny or oversight applied to other, less intrusive industries. Even then, US legal sanctions for data breaches are weak by international standards. Whether this is a loophole, an oversight, or a deliberate ploy by a government in the pocket of a lucrative new industry is hard to say, and perhaps a matter of perspective.

But there is no doubt that, for a data analyst, identifying individuals from supposedly anonymized geolocation data is relative child’s play. A regular journey from a domestic address to an office, factory, commercial unit, or the like can easily be read as a commute. Repeat that journey from the same phone five times a week, and you have an individual’s workplace and home address. Electoral roll data can give you a name, and then all the other geolocation data from that phone gives you their every movement and whereabouts – 24/7, all year round.
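To see just how little analysis that takes, here is a minimal sketch of the home-and-work inference described above. The (latitude, longitude, timestamp) ping format and the time-of-day cut-offs are illustrative assumptions, not details from the leaked dataset.

```python
# A minimal sketch of the "child's play" described above: guessing a device
# owner's home and workplace from raw location pings for a single device.
from collections import Counter
from datetime import datetime

def _cell(lat, lon, precision=3):
    """Round coordinates to a coarse grid cell (~100 m of latitude at 3 decimals)."""
    return (round(lat, precision), round(lon, precision))

def infer_home_and_work(pings):
    """pings: iterable of (lat, lon, unix_timestamp) for one device.

    Home = the grid cell most often visited late at night.
    Work = the cell most often visited during weekday office hours.
    """
    night, office = Counter(), Counter()
    for lat, lon, ts in pings:
        t = datetime.fromtimestamp(ts)
        if t.hour >= 22 or t.hour < 6:                 # late night: likely home
            night[_cell(lat, lon)] += 1
        elif t.weekday() < 5 and 9 <= t.hour < 17:     # weekday daytime: likely work
            office[_cell(lat, lon)] += 1
    home = night.most_common(1)[0][0] if night else None
    work = office.most_common(1)[0][0] if office else None
    return home, work
```

Cross-reference those two cells with public records such as the electoral roll, as described above, and the ‘anonymous’ device has a name.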

The New York Times team used the data leak to do just that, identifying key military personnel, law enforcement officials, and high-powered lawyers, exactly the kinds of people whom terrorists, foreign governments, or criminal cartels might want to target, and following their every move, learning their buying habits, their commutes, and where their children went to school, all from nothing more than location data.

Critics of this industry say that it has created history’s most sophisticated surveillance system, and done so accidentally, without any clear idea of what to do with it (aside from haphazard attempts at monetization). But if we want to see how a creepy authoritarian regime might employ such data, we need look no further than China’s social-media-influenced ‘good citizen’ Social Credit system, which awards points to individuals who behave in ways the government deems positive, points that confer social privilege and access to social resources (or, conversely, lock non-conformist individuals out of access to basic social and material needs).

Even in a democracy, who this data belongs to, and who has access to it, matters (and even democracies are not always to be trusted). Imagine the national security implications of enemy regimes or terrorists gaining leaked or purchased access to a constant record of the movements of key members of government (or of authoritarian regimes tracking their domestic or émigré critics).

So props to the New York Times for exposing this terrifying prospect, though we’re not sure we can forgive them for the nights of sleep lost contemplating just what to do about it.

Meet the A.I. that sees how we see

A new AI analyzes images by learning from how we humans look at our world.

Most of us are familiar with the somewhat imperfect visualizing skills of AIs, typified by those examples where, asked to find dogs, they see them everywhere, or where they struggle to define what is or isn’t a human face (particularly hilarious when it comes to face-swap apps).

Part of the problem is that when we humans look at a picture, or at the world around us, we are processing light, and our brain uses key identifiers and visual cues to put together a reasonable sense of what it is we are taking in. An AI, by comparison, is merely looking at thousands upon thousands of pixels and trying to make sense of the relationships between them by finding patterns in the whole, without the mental shortcuts humans are so used to using to work out which elements in an image are important and what they might represent.

This is why a team at Duke University and MIT Lincoln Laboratory has chosen a different approach, teaching its AI the tricks of the trade that we humans take for granted, in a program called Birdwatcher.

Where previous AIs would have been force-fed a diet of the images they were looking for (i.e., dogs’ faces), the Duke and MIT Lincoln team taught theirs to look for parts of images, in this case parts of birds, using 11,788 images of 200 different species, and to feed back its observations and conclusions to the team.

By teaching the AI that a woodpecker might be identified by either a red plume or black-and-white breast feathers, they allowed it to use the visual compositing skills that help humans quickly identify what an image might show, drawing on the advanced deep learning techniques that have produced breakthroughs across a variety of AIs.

The team also taught the AI that certain visual features might be common to several types of birds and gave it the discrimination skills needed to decide which, based on the visual evidence, might be the likely candidate in the picture before it.

The machine then displayed the image and highlighted the detail it had used to identify the particular bird type, with its reasoning for doing so, to allow the human operators to decide for themselves whether it had chosen correctly. This wasn’t just to measure whether the test was working; this is fundamental to the model of how this AI would work in practice.
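The article doesn’t reproduce the team’s code, but the general ‘this part of the image looks like that learned part’ mechanism it describes can be sketched roughly as follows; the array shapes and variable names here are illustrative assumptions rather than details of the actual system.

```python
# Rough, framework-free sketch of part-based, explainable classification; not the
# Duke/MIT team's code. Assumed shapes: features is an (H, W, D) conv feature map,
# prototypes is (P, D) learned part vectors, class_weights is (P, C).
import numpy as np

def prototype_scores(features, prototypes):
    """For each prototype, find its best-matching image patch and the match score."""
    H, W, D = features.shape
    patches = features.reshape(H * W, D)            # every spatial location is a candidate part
    sims = patches @ prototypes.T                   # similarity of every patch to every prototype
    best_patch = sims.argmax(axis=0)                # which patch matched each prototype best
    best_score = sims.max(axis=0)                   # how strong that match was
    locations = [divmod(int(i), W) for i in best_patch]  # (row, col) to highlight for the user
    return best_score, locations

def classify(features, prototypes, class_weights):
    """Combine prototype match scores into class scores, keeping the evidence."""
    scores, locations = prototype_scores(features, prototypes)
    class_logits = scores @ class_weights           # e.g. a "red plume" match votes for "woodpecker"
    return class_logits.argmax(), scores, locations
```

The returned locations are the point: they give a human reviewer something concrete to check, which is exactly the transparency the article credits Birdwatcher with.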

AIs can be used to process information quickly, at speeds orders of magnitude beyond anything a human mind could manage. But trusting them to make decisions alone, particularly in matters of life and death, would be a step too far, particularly since human fallibility in programming them can never be ruled out.

These visualizing AIs could be useful in medicine, for spotting cancers, for instance, and the ability to present a human expert with what they have seen and why they have flagged it provides a last line of defense against misdiagnosis. It also allows the users working with the AI to observe its ‘thinking’ in a way that previous AIs just haven’t offered, where any flaws in the reasoning were arduous, if not impossible, to identify or investigate.

Using Birdwatcher, the AI scored an 84% success rate, matching its rivals in the field but besting them thanks to this facility for sharing the snapshots that explain how it reached its conclusions.

Recent research has shown that the shortcuts humans use to process visual stimuli mean that we can identify images with as little as 13 milliseconds of viewing time.

Those shortcuts can lead to hasty misidentifications (ever waved at someone only to realize it wasn’t the person you thought?), but also to more serious issues, such as hallucinations, particularly as seen in Charles Bonnet syndrome, where blind spots in a person’s vision can be filled with vivid hallucinations based on what the brain thinks should be there.

The world’s first 3D-printed neighborhood is being built in Mexico

A charity built an entire neighborhood using a 3D printer.

Of all the advances of modern science, 3D printers are among the most impressive. We have long been promised that a 3D printer – coupled with materials, expertise, and a boatload of patience – can create anything.

Some users are happy with a whistle or a plastic toy. Healthcare professionals have taken the idea a step further, constructing surgical transplant materials – including a skull. Now, a nonprofit organization dedicated to helping the poor and needy has 3D-printed an entire neighborhood.

Visit the outskirts of a town in southern Mexico and you’ll find an oversized 3D printer. Over thirty feet in length, this marvel belongs to New Story. The charity has used the printer to create houses measuring 500 square feet.

Can people live in these houses?

Yes – that’s the whole point of the endeavor. New Story aims to use 3D-printed homes to solve the global housing crisis for those living in poverty. This is just the first step on a longer journey.

Fifty of the town’s poorest families will initially be moving in and will own their homes outright. Each of the homes will boast two bedrooms, a living room, a kitchen, and a bathroom. Naturally, plumbing and electricity will also be provided. This is a substantial upgrade on the current living arrangements of the town.

Most of the citizens earn just over $75 a month and live in wooden shacks. Outdoor restrooms, no water, and flooding during the rainy season are common. These concerns will become a thing of the past when the homes are complete.

New Story is no stranger to building homes to aid those in need. In nations such as Bolivia, El Salvador, and Haiti – a country so impoverished that some citizens eat mud just to survive – its volunteers have built traditional bricks-and-mortar homes.

It was the experience of Haiti in particular that inspired New Story to look into faster construction techniques. Devastated by an earthquake in 2010, the people of Haiti needed homes as a matter of urgency, pushing the charity to find ways of speeding up the construction process and cutting costs.

New Story is not working alone

Naturally, good intentions and honest endeavor only go so far, especially for a charitable organization. The New Story team is dedicated and talented, but there are limits to what can realistically be achieved single-handedly.

This led New Story to partner with Icon, a Texan company that specializes in construction tech. It was Icon that provided the super-sized 3D printer, dubbed the Vulcan II. While other printers have been designed to build a single home, this is the first time an entire neighborhood has been attempted.

Obviously, the ethical implications for the local economy have also been taken into consideration. New Story and Icon have also teamed up with Echale a Tu Casa, a local nonprofit construction company, enabling local workers to complete the building works that lie beyond the capabilities of 3D printing.

This was not an easy process

As you can probably imagine, the process of building these homes was beset by challenges.

The town that hosts the homes remains unnamed to protect the privacy of its residents. Another location was initially chosen, but bureaucratic red tape meant that a Plan B was required. The town also sits on an earthquake fault line, meaning a wide range of safety tests were needed.

Once the project was signed off and approved, more logistical issues arose. The Vulcan II took three months to clear customs and enter Mexico from its origin in Austin. By the time it arrived, construction was delayed by flooding; Mexico’s rainy season is famously problematic.

Eventually, construction was ready to start. The homes are built by applying layer upon layer of concrete through the 3D printer, which creates the walls and floors of each home. The quality of the printing needs to be carefully managed and monitored depending on the weather conditions.
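Icon hasn’t published the Vulcan II’s control software, but the layer-by-layer principle can be illustrated with a toy sketch: trace the wall footprint once per layer, lifting the print head by one layer height on each pass. The 25 mm layer height and the rectangular footprint below are assumptions for illustration only.

```python
# Toy illustration of layer-by-layer concrete printing; not Icon's software.
def layer_passes(outline, wall_height_m, layer_height_m=0.025):
    """outline: list of (x, y) points tracing the wall footprint once.

    Returns one (z_height, path) pass per concrete layer, bottom to top.
    """
    layers = round(wall_height_m / layer_height_m)
    return [(round(i * layer_height_m, 3), outline) for i in range(1, layers + 1)]

# Example: a simple 4 m x 3 m rectangular footprint printed 2.5 m high = 100 passes.
passes = layer_passes([(0, 0), (4, 0), (4, 3), (0, 3), (0, 0)], wall_height_m=2.5)
```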

The first two homes were completed simultaneously in 24 hours of printing time, albeit with that time spread over several days, born of a desire to work only during daylight hours. The intention is to speed the process up further in the future.

Eventually, the plan is to expand the reach of 3D-printed homes throughout the globe. A complex of 400-square-foot apartments is scheduled for Austin, with the aim of providing accommodation for the city’s homeless population.

New Story seeks to change the entire face of the construction industry. Based on these results in Mexico, the sky is the limit as to what can be achieved.
