Hello Mi Fans,
Today let's learn something about electronic image stabilization and the AI-based computing engine.
Electronic Image Stabilization (EIS)
Electronic image stabilization (EIS) is an image enhancement technique that uses electronic processing. EIS minimizes blurring and compensates for shake of the device, most often a camera. More technically, the technique compensates for pan and tilt, the angular movements corresponding to yaw and pitch.
The EIS technique may be applied to image-stabilized binoculars, still/video cameras, and telescopes.
EIS corrects device shake, which normally shows up as noticeable jitter between video frames or blur in a still image. Camera shake is particularly tricky with still cameras, especially at slow shutter speeds and/or with telephoto lenses. In astronomy, telescope shake compounds with gradual atmospheric variations, which visibly shift the apparent positions of objects.
EIS cannot prevent blur from subject movement or extreme camera shaking, but it is engineered to minimize blur from normal handheld lens shaking. Certain cameras and lenses are built with more aggressive active modes and/or secondary panning features.
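At its core, EIS estimates how much each new frame has moved relative to a reference frame and then shifts the frame back to cancel that motion. Here is a minimal sketch of that idea in Python, using phase correlation to estimate a whole-pixel translation between two frames (real EIS pipelines also use gyroscope data and sub-pixel warping; the function names here are illustrative, not any vendor's API):

```python
import numpy as np

def estimate_shift(ref, cur):
    """Estimate the (dy, dx) translation of `cur` relative to `ref` via phase correlation."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(cur)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-9          # normalize to keep only phase information
    corr = np.fft.ifft2(cross).real        # sharp peak at the relative displacement
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates into signed shifts (FFT indices are circular)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def stabilize(ref, cur):
    """Shift `cur` by the estimated camera motion so it lines up with `ref`."""
    dy, dx = estimate_shift(ref, cur)
    return np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
```

Applied frame by frame against a running reference, this kind of correction is what removes the jitter described above.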
This technology was used in two Xiaomi devices:
1. Mi A1, "the father of dual cameras in India"
2. Redmi Note 5 Pro, "the camera beast of India"
AI-Based Computing Engine
Artificial intelligence (AI) is everywhere, and if you haven't yet got an AI-powered smartphone, you probably soon will. Is it all just marketing hype, or is AI in a smartphone, and particularly in its camera, something we should all aspire to have? With the term AI increasingly being used not only in smartphones but in all kinds of cameras, it pays to know what AI is actually doing for your photos.
AI is about new kinds of software, initially to make up for smartphones’ lack of zoom lenses. “Software is becoming more and more important for smartphones because they have a physical lack of optics, so we’ve seen the rise of computational photography that tries to replicate an optical zoom,” says imaging analyst Arun Gill, Senior Market Analyst at Futuresource Consulting. “Top-end smartphones are increasingly featuring dual-lens cameras, but the Google Pixel 2 uses a single camera lens with computational photography to replicate an optical zoom and add various effects.”
Computational photography is a digital image processing technique that uses algorithms to replace optical processes, and it seeks to improve image quality by using machine vision to identify the content of an image. "It's about taking studio effects that you achieve with Lightroom and Photoshop and making them accessible to people at the click of a button," says Simon Fitzpatrick, Senior Director, Product Management at FotoNation, which provides much of the computational technology to camera brands. "So you're able to smooth the skin and get rid of blemishes, but not just by blurring it – you also get texture." In the past, the technology behind 'smooth skin' and 'beauty' modes has essentially been about blurring the image to hide imperfections. "Now it's about creating looks that are believable, and AI plays a key role in that," says Fitzpatrick. "For example, we use AI to train algorithms about the features of people's faces."
In recent years we've seen many dual-lens phone cameras use two lenses to produce aesthetically pleasing images that have a blurry background around the main subject. People (and, therefore, Instagram) love blurry backgrounds, but instead of using dual-lens cameras or picking up a DSLR and manually manipulating the depth of field, AI can now do it for you. Commonly called the 'bokeh' effect (from the Japanese word for blur), it works by having machine learning identify the subject and blur the rest of the image. "We can now simulate bokeh using AI-based algorithms that segment people from foreground and background, so that we can create an effect that begins to look very much like a portrait taken in a studio," says Fitzpatrick. The latest smartphones allow you to do this for photos taken with either the rear or the front (selfie) camera.
“People refer to it as bokeh, but you don’t get the true blur you get with a DSLR where you can change the depth; with a phone, you can only blur the background,” says Gill. “But a small and growing number of photographers are really impressed with it and are using an iPhone X for everyday capture, and only when they’re on professional jobs will they get out their DSLR.”
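The segment-then-blur idea described above can be sketched in a few lines. This is a simplified stand-in, assuming the subject mask has already been produced by a trained segmentation network (the function name `fake_bokeh` and the box-blur kernel are illustrative choices, not how any particular phone implements it):

```python
import numpy as np

def fake_bokeh(image, mask, radius=5):
    """Blur the background of `image` while keeping the masked subject sharp.

    `mask` is a boolean array: True where the subject is (e.g. from a
    segmentation network), False for background pixels to be blurred.
    """
    # Simple separable box blur (circular at the edges) as a stand-in
    # for a proper lens-blur kernel.
    blurred = image.astype(float)
    for axis in (0, 1):
        blurred = np.mean(
            [np.roll(blurred, s, axis=axis) for s in range(-radius, radius + 1)],
            axis=0,
        )
    # Composite: subject pixels from the original, background from the blur.
    mask3 = mask[..., None] if image.ndim == 3 else mask
    return np.where(mask3, image, blurred)
```

The quality of the final "portrait" depends almost entirely on how accurate the segmentation mask is, which is exactly where the AI training Fitzpatrick describes comes in.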
To take an example, the Google Pixel 2 is one of the devices that uses an AI computing engine to generate a good selfie.
We have used the same technology in