Engineering giants Rolls-Royce have confirmed that a deal has been sealed with Google, with the joint aim of developing ‘Intelligent Awareness’ systems for the next generation of autonomous vessels.
The joint venture will see Rolls-Royce utilising Google’s cloud-based machine learning software, with a view to creating an artificial intelligence system that can detect and track objects on the oceans. The software, which creates tailored machine learning models accessible through the cloud, will be used to train on and analyse the data gathered at sea. This technology is believed to be an essential tool for the future development of autonomous vessels.
When the project is complete, the system will not only be used in future generations of ships, it will also be fitted to current vessels, with the long-term aim of making ships and their crews safer whilst improving efficiency.
The system will be able to provide a ship’s crew with a more advanced understanding of their surroundings, using data gained from existing onboard systems such as the Automatic Identification System (AIS) and radar.
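The article does not detail how such data would be combined. A minimal sketch of one possible approach — associating anonymous radar contacts with identified AIS reports by proximity — might look like the following. The vessel name, coordinates and distance threshold here are invented for illustration; a real intelligent-awareness system would fuse many more sensor types over time.

```python
import math
from dataclasses import dataclass

@dataclass
class Contact:
    """A detected object, with position in metres relative to own ship."""
    x: float
    y: float
    source: str            # "radar", "ais" or "fused"
    name: str = "unknown"  # AIS reports carry a vessel identity; radar does not

def fuse(radar, ais, max_gap=200.0):
    """Label each radar contact with the nearest AIS identity within max_gap metres.

    A deliberately simple nearest-neighbour association, shown only to
    illustrate the idea of combining AIS and radar data.
    """
    fused = []
    for r in radar:
        best, best_d = None, max_gap
        for a in ais:
            d = math.hypot(r.x - a.x, r.y - a.y)
            if d < best_d:
                best, best_d = a, d
        label = best.name if best else "unidentified"
        fused.append(Contact(r.x, r.y, "fused", label))
    return fused

radar = [Contact(500, 120, "radar"), Contact(-900, 40, "radar")]
ais = [Contact(520, 100, "ais", "MV Northern Star")]
result = fuse(radar, ais)
print([c.name for c in result])  # the first radar blip matches the AIS report
```

The second radar contact has no AIS report nearby, so it stays "unidentified" — exactly the kind of object an AI classifier would then be asked to recognise from camera or lidar data.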
Leading the ‘Advanced Autonomous Waterborne Applications Initiative’ (AAWA), and working alongside many companies and academic partners across various industries, Rolls-Royce are the current market leaders in the field of autonomous shipping.
More recently, Rolls-Royce have also announced plans for the development of autonomous naval vessels, following interest in their work from naval forces around the world. The company has unveiled designs for a 60-metre-long vessel with a range of 3,500 nautical miles. The design is aimed at single-role missions such as patrol and surveillance, including mine detection and screening and monitoring for naval fleets. The Royal Navy has also been looking at the future of autonomous vessels and fleet developments (see Engineering News, September 2017).
Meanwhile, in Norway, engineers are continuing work on the cargo ship “Yara Birkeland”, designed by Marin Teknikk and due to launch in 2020. Yara Birkeland will make history as the first zero-emission autonomous cargo ship – scale-model trials for the vessel began last month in Trondheim, Norway.
A 3D selfie app has been developed by a group of computer scientists at the University of Nottingham and Kingston University. The scientists have tackled a problem that had previously defeated experts in computer vision and computer graphics: reconstructing the intricate detail of a face in three dimensions from limited input. Using new techniques, it is now possible to produce a full 3D facial reconstruction from a single 2D image.
The new web-based app allows users to upload a single colour image and receive their 3D model a few seconds later. More than 400,000 users have already tried the online app with successful results.
The work, “Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression”, was led by PhD students Aaron Jackson and Adrian Bulat, both researchers at the Computer Vision Laboratory in the School of Computer Science at Nottingham, in collaboration with Dr Vasileios Argyriou from the School of Computer Science and Mathematics at Kingston University. Although the findings still need a certain amount of fine-tuning, the research has been hailed as a breakthrough.
The developments use what is referred to as a ‘Convolutional Neural Network’ (CNN). Neural networks are an area of artificial intelligence in which machines learn from experience rather than being explicitly programmed – in much the same way as our brains work. The team, supervised by Dr Yorgos Tzimiropoulos, trained a CNN on a large dataset of 2D pictures paired with 3D facial models. Using this wealth of information, the CNN learned to reconstruct the 3D facial features of a subject, and to generate facial measurements, from a single 2D image.
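The “direct volumetric regression” in the paper’s title means the CNN outputs, for each voxel in a 3D grid aligned with the input image, the probability that the voxel lies inside the face; the facial surface is then recovered from that volume. The toy sketch below substitutes a hand-made sphere for the network’s output (no real CNN is involved, and the grid is tiny) purely to illustrate the final decoding step: extracting the surface voxels by thresholding. The real system uses a much larger volume and a marching-cubes step to produce a smooth mesh.

```python
import math

DEPTH, HEIGHT, WIDTH = 16, 16, 16

def fake_prediction(z, y, x):
    """Stand-in for the CNN output: a solid sphere of 'inside-the-face' voxels."""
    cz, cy, cx, r = DEPTH / 2, HEIGHT / 2, WIDTH / 2, 5.0
    d = math.sqrt((z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2)
    return 1.0 if d <= r else 0.0

def surface_points(volume_fn, threshold=0.5):
    """Collect occupied voxels that touch an empty neighbour, i.e. the surface."""
    pts = []
    for z in range(1, DEPTH - 1):
        for y in range(1, HEIGHT - 1):
            for x in range(1, WIDTH - 1):
                if volume_fn(z, y, x) <= threshold:
                    continue  # voxel predicted to be outside the face
                neighbours = [volume_fn(z + dz, y + dy, x + dx)
                              for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                                 (0, -1, 0), (0, 0, 1), (0, 0, -1))]
                if min(neighbours) <= threshold:  # borders empty space
                    pts.append((z, y, x))
    return pts

points = surface_points(fake_prediction)
print(len(points), "surface voxels recovered")
```

Because the network predicts the whole volume at once, occluded parts of the face (the far side of the nose in a profile shot, for instance) come out of the same pass – which is what lets the method fill in the ‘non-visible’ sections described below.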
Tzimiropoulos explained in a recent paper that the “main novelty” of this research is the simplicity of the approach, which sidesteps the complexity of other techniques. Having been trained on 80,000 faces, the neural network can produce 3D facial images directly from 2D images.
Existing methods for obtaining 3D images require numerous facial images, which creates many technical challenges. These include establishing dense correspondences across large facial poses, expressions and non-uniform illumination – all of which interfere with the final result.
Jackson clarified that their CNN needs only a 2D image of a face, which can be in any pose or expression – in profile, front-facing or smiling, for example. Bulat was keen to point out that their method also reconstructs the ‘non-visible’ sections of the face, using a little machine guesswork to fill in the blanks.
The team’s method demonstrates another possible advance in deep learning: building machines that use artificial neural networks which, like the brain, make connections between many pieces of information.
With these advancements, the technology could be used in computer games, enabling users to create realistic, personalised avatars. The system could also help consumers shopping for accessories such as jewellery, headwear and glasses. With further investigation and research, it could enhance medical simulations for patients undergoing plastic surgery, as well as aiding the understanding of medical conditions such as autism and depression.
The results of this research are to be unveiled at the International Conference on Computer Vision (ICCV) 2017, which will be held in Venice this November.
A policy for future American leadership in space has been set out following President Trump’s request to re-establish the United States National Space Council, almost 25 years after it ceased operating in 1993.
During the meeting, Vice-President Mike Pence reported that the US has fallen behind other nations around the globe and has lacked focus in its current and future space efforts. In his address, he outlined that America needs to continue its work to establish a human presence in orbit and, once that is achieved, to investigate our celestial neighbours.
The US wants to see American astronauts back in space and involved in future Moon landings, building the foundations for expeditions to Mars and further afield.
The idea is to use new landings on the Moon as a “training ground facility”, whilst strengthening the USA’s commercial and international partnerships in innovation and research.
The meeting also highlighted national security, with Pence commenting that Russia and China have pursued a range of anti-satellite technologies that could reduce the US military’s effectiveness and increase the possibility of attacks on the USA’s satellite systems as part of their ‘future warfare’ policies.
Pence has given the members of the council 45 days to deliver their recommendations and proposals to the President, stating that “America must be dominant in space, as they are on Earth.”
NASA has welcomed the Council’s work. Robert Lightfoot, Acting Administrator of NASA, remarked, “It builds on the hard work we have already been doing on the Space Launch System rocket and Orion spacecraft, our efforts to enable our commercial partners and work with our international partners in Low Earth Orbit at the International Space Station and what we have been learning from our current robotic presence at the Moon and Mars.”
NASA is keen to continue working with commercial and international partners on the robotic landers exploring the Moon’s landscape and its natural resources.
Origami could shape the design of future spacecraft. Engineers at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, have been fascinated by the way so much detail can be folded into such a small volume, and the ancient art has given them new inspiration.
The notion that the ancient Japanese art of paper folding could be put to use has been realised in a project dubbed “Starshade”. The purpose of Starshade would be to block the light from distant stars. Once in space, it would unfold to a diameter of approximately 26 metres (85 feet) – put in perspective, about the size of a baseball diamond. By blocking starlight, it would enhance the ability of a space telescope to detect exoplanets (planets that orbit stars outside our solar system).
The Wide Field Infrared Survey Telescope is currently being considered for use with Starshade. Current plans involve a purpose-built coronagraph (an instrument that blocks out light emitted by a star’s surface so that its surroundings can be observed) to image larger planets orbiting other stars; combining the telescope with Starshade would enable much smaller planets to be detected. Unfortunately, because of its size and delicate construction, Starshade would be at risk from micrometeorite showers and could be punctured by a strike. If that happened, light would leak through and the telescope’s effectiveness would be reduced.
To avoid such disruption to their observations, JPL turned to an origami folding pattern. Manan Arya, technologist on Starshade, explained that by using multiple layers of light-blocking material with gaps built into the design, the probability of a micrometeorite punching straight through every layer is significantly reduced. It was also vital to develop a design that folds smoothly and predictably.
He was inspired by the history of “space folding”. Echo 1, an Earth-orbiting balloon launched in the 1960s, was crammed before launch into a spherical canister just 26 inches in diameter, yet was highly visible from the ground once it reached orbit.
Robert Salazar, an intern at JPL, assisted in the design of Starshade’s folding pattern. He now continues his experimental folding on a concept called “Transformers for Lunar Extreme Environments”, led by senior research scientist Adrian Stoica. The transformers would use unfolding reflective mirrors to “bounce” the sun’s rays into large craters on the Moon. The redirected light could then be used to melt water ice on the surface, where it may exist, or to provide power for machinery.
Working areas at JPL can often be seen littered with scraps of paper folded into designs. Designs have also been made with Kapton, a tinsel-like material currently used as insulation within spacecraft, and with a polyethylene fabric that does not leave a crease after folding.
Salazar admitted that when it comes to origami, “The magic comes from the folding”. By working with various materials, you learn to understand how they fold, without relying solely on geometrical calculations. Salazar has been practising origami for over 17 years, not just for space engineering: his original designs included animals, and in more recent years he has made figures of endangered species and donated his creations to wildlife conservancies.
Using origami in engineering is still a new concept, and could be adopted across many fields of engineering and design. With so many avenues still to be explored – particularly in designing structures that are not flat, such as spheres and paraboloids (solid shapes whose cross-sections are parabolas) – the possibilities are exciting.
Although the Starshade and Transformer projects are still in their early stages, origami could become more widely used in space in the near future, with NASA hoping to launch key missions in the next few years using modular spacecraft called ‘CubeSats’. These small structures are the size of a briefcase, cost relatively little to manufacture and are easier to launch because of their size. Although CubeSats are limited in their operation and abilities, origami design innovations could improve their effectiveness.
JPL have also used origami in robotic applications, for a robot called “Puffer”. Its collapsible body is manufactured from a folding circuit board embedded within fabric, allowing the robot to climb over rocks, squeeze under ledges and traverse other difficult terrain.
In July this year, NASA challenged engineers and designers to devise a radiation-shielding system using only origami designs. Origami clearly has a lot to offer, and shows another way forward for the future of space exploration.