Web Of Technopolis
Wednesday, 13 January 2016
Slide N Joy
Just plug the second USB into your laptop and wow…
Now it is connected, and it works well.
Labels:
Slide N Joy,
triple screen
Monday, 19 October 2015
Word Lens
Imagine that you are on tour and you don't understand the language used there, so it is very difficult to read anything that is written. "Word Lens" is the best option in such a situation. It instantly recognizes foreign text, like that on signboards and menu cards, and translates it to English.
It was developed by a group of American and Brazilian engineers and operates on the built-in cameras of smartphones and similar devices.
Word Lens used the built-in cameras on smartphones and similar devices to quickly scan and identify foreign text (such as that found in a sign or a menu), and then translate and display the words in another language on the device's display. The words were displayed in the original context on the original background, and the translation was performed in real-time without connection to the internet. For example, using the viewfinder of a camera to show a shop sign on a smartphone's display would result in a real-time image of the shop sign being displayed, but the words shown on the sign would be the translated words instead of the original foreign words.
At launch, the Word Lens feature supported a limited number of languages (English to and from French, German, Italian, Portuguese, Russian and Spanish), with additional languages expected in the future. It also works even when there is no Internet connection available.
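To make the idea concrete, here is a minimal, self-contained Python sketch of the recognize-translate-overlay loop described above. The tiny dictionary and the mocked OCR result are purely illustrative assumptions, not Word Lens's actual code.

# A minimal sketch of the Word Lens idea: recognize words in a camera frame,
# translate them with an on-device dictionary (no internet), and redraw them
# in place. The OCR step is mocked here; in the real app it runs on the live
# camera image.

OFFLINE_DICTIONARY = {          # tiny stand-in for a bundled language pack
    ("es", "en"): {"salida": "exit", "abierto": "open", "cerrado": "closed"},
}

def translate_word(word, src="es", dst="en"):
    """Look a single word up in the on-device dictionary; keep it if unknown."""
    return OFFLINE_DICTIONARY[(src, dst)].get(word.lower(), word)

def translate_frame(ocr_regions, src="es", dst="en"):
    """ocr_regions: list of (text, bounding_box) pairs produced by OCR.
    Returns the same regions with translated text, ready to be drawn back
    onto the frame at the original positions (keeping the background)."""
    return [(translate_word(text, src, dst), box) for text, box in ocr_regions]

# Example: a mocked OCR result for one frame of a Spanish shop sign.
frame_regions = [("SALIDA", (40, 10, 200, 60)), ("Abierto", (40, 80, 180, 120))]
print(translate_frame(frame_regions))   # [('exit', ...), ('open', ...)]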
Google has also added a new real-time conversation mode that is available for the first time on the iOS platform. Previously incorporated into Android, this real-time mode improves the flow of a conversation by automatically detecting the languages being used by the participants. Once language identification is complete, users can speak at a natural pace without needing to tap the mic between each side of the conversation.
Just have a look
Monday, 5 October 2015
Gloveone : Virtual Reality
As we all know, virtual reality can be sensed through sight and sound. But with Gloveone you can also touch virtual objects. Yes, wearers will be able to feel rain and fire, or something as delicate as butterfly wings, and the gloves even let you fire a gun or grab an apple.
Gloveone is a pair of gloves from the Spain-based tech company NeuroDigital Technologies that will let you actually feel sensations such as shape and weight when interacting with virtual objects.
(It should not be confused with Glove One, a wearable mobile communications device created by Bryan Cera, a student at the University of Wisconsin-Milwaukee.)
The technology translates touch sensations into vibrations, and each pair has 10 sensors placed in the palm and fingertips. Four sensors, located in the palm and the thumb, index and middle fingers, can detect each other.
NeuroDigital describes the sensation of wearing these gloves as a “realistic perceptual illusion”: you can’t feel the weight of a virtual object exactly as you would in real life, but you can compare weights within the virtual world.
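To make the haptic side concrete, here is a rough Python sketch of how contact with a virtual object might be mapped to per-point vibration intensities. The position names, the 0-255 intensity scale and the contact_to_vibration() helper are illustrative assumptions, not NeuroDigital's actual API.

# A rough sketch of how a haptic glove driver might map virtual contact events
# to its vibration points. Layout and intensity scale are assumed for the
# example, not taken from the real Gloveone hardware.

POINTS = ["palm", "thumb", "index", "middle", "ring", "pinky",
          "palm_left", "palm_right", "palm_top", "palm_bottom"]  # 10 spots

def contact_to_vibration(contacts):
    """contacts: dict mapping point name -> contact force in [0.0, 1.0].
    Returns per-point vibration intensities (0-255) to send to the glove."""
    frame = {}
    for name in POINTS:
        force = max(0.0, min(1.0, contacts.get(name, 0.0)))
        frame[name] = int(force * 255)     # stronger contact -> stronger buzz
    return frame

# Example: a virtual apple rests mostly on the index and middle fingertips.
print(contact_to_vibration({"index": 0.8, "middle": 0.6, "palm": 0.2}))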
It is available in three sizes. It features a Bluetooth wireless connection and offers more than four hours of battery life. But you will have to wait until 2016 before you can enjoy the experience through Gloveone.
Just have a look at the video of Gloveone: Virtual Reality.
Saturday, 19 September 2015
Leap Motion
The multi-touch desktop was a failed product because hands get very tired with prolonged use. Leap Motion wants to challenge this dark area again with a more advanced idea: it lets you control the desktop with your fingers, without touching the screen.
Leap Motion is a company founded in 2010. It manufactures and markets a computer hardware sensor device that supports hand and finger motions as input, analogous to a mouse, but requires no hand contact or touching.
It’s not your typical motion sensor: Leap Motion allows you to scroll web pages, zoom in on maps and photos, sign documents and even play a first-person shooter game with only hand and finger movements. The smooth reaction is the most crucial point here. More importantly, you can own this future for just $70.
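As a concrete illustration, here is a small Python sketch of one way an application could turn tracked fingertip positions into a scroll command. The frame format and the scroll_amount() helper are invented for this example; the real Leap Motion SDK exposes its own frame, hand and finger objects.

# An illustrative sketch of turning tracked fingertip positions into a scroll
# action: compare the average fingertip height between two frames.

def scroll_amount(prev_frame, frame, sensitivity=3.0):
    """Return how many lines to scroll (positive = up, negative = down)."""
    if not prev_frame["fingertips"] or not frame["fingertips"]:
        return 0
    prev_y = sum(y for _, y, _ in prev_frame["fingertips"]) / len(prev_frame["fingertips"])
    cur_y = sum(y for _, y, _ in frame["fingertips"]) / len(frame["fingertips"])
    return int((cur_y - prev_y) * sensitivity)

# Example with two mocked frames (fingertip x, y, z positions in millimetres).
f1 = {"fingertips": [(0, 150, 20), (15, 152, 22)]}
f2 = {"fingertips": [(0, 170, 20), (15, 168, 22)]}
print(scroll_amount(f1, f2))   # hand moved up -> positive scroll value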
To know more about Leap Motion:
https://www.youtube.com/watch?v=_d6KuiuteIA
Labels:
leap motion
Tuesday, 1 September 2015
Eye Tribe : Improved Approach to Eye Tracking
Eye tracking is the measurement of eye activity. An eye tracker is a device for measuring eye positions and eye movements. It is used in human-computer interaction and in research on the visual system.
In other words, Eye tracking is the process of using sensors to locate features of the eyes and estimate where someone is looking (point of gaze).
Fig.: Eye Tracking Devices
History of Eye Tracking:
- 1870s: Scientific study of eye tracking began
- 1900s: Photographic improvements
- 1940s: Head-mounted eye trackers first developed
- 1970s: High-speed data processing and cognitive science
- 1980s: Human-computer interaction developed
- 1990s: Commercial applications made practical
Tracker Types:
Eye trackers measure rotations of the eye in one of several ways, as follows:
- Eye attached tracking
- Optical tracking
- Electric potential Measurement
Fig.: Gaze Plot
Eye Tribe :
The Eye Tribe is a Danish startup company that produces eye tracking technology and sells it to software developers so they can incorporate the eye tracker into their applications and programs. The Eye Tribe's software allows a user to control a smartphone, tablet, or computer with just the look of an eye.
It enables eye control on a computer, with hands-free navigation of apps and websites, and it makes for a great gaming experience.
A live demo was given at LeWeb this year, and we may actually be able to see it in action in mobile devices in 2013.
The smallest eye tracking device in the world measures 20 x 1.9 x 1.9 cm.
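To give a feel for how gaze data becomes control, below is a small Python sketch of "dwell clicking": if the estimated point of gaze stays within a small region for long enough, it is treated as a click. The sample format, thresholds and detect_dwell_click() helper are illustrative assumptions, not The Eye Tribe's actual API.

# A small sketch of dwell clicking on top of a stream of gaze samples.
DWELL_TIME = 0.8      # seconds the gaze must stay put
RADIUS = 40           # pixels of allowed jitter around the first sample

def detect_dwell_click(samples):
    """samples: list of (timestamp_seconds, x, y) gaze points.
    Returns the (x, y) of a dwell click, or None if the gaze kept moving."""
    if not samples:
        return None
    t0, x0, y0 = samples[0]
    for t, x, y in samples:
        if (x - x0) ** 2 + (y - y0) ** 2 > RADIUS ** 2:
            return None                       # gaze wandered off the target
        if t - t0 >= DWELL_TIME:
            return (x0, y0)                   # held steady long enough: click
    return None

print(detect_dwell_click([(0.0, 500, 300), (0.4, 505, 298), (0.9, 498, 303)]))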
Components of the Eye Tribe tracker:
- Camera
- High resolution Infrared LED
- USB 3.0 connection, which allows it to run with most computers and tablets.
The only disadvantage of the Eye Tribe is that it does not work well with things such as contact lenses or long eyelashes.
The main advantage of the Eye Tribe is that it is a boon for physically disabled people.
To know more : https://www.youtube.com/watch?v=2q9DarPET0o
Labels:
eye tracking,
eye tribe,
gaze,
LeWeb
Sunday, 30 August 2015
Fog Computing
Hi friends, we all know what cloud computing is. Cloud Computing is the practice of using a network of remote servers hosted on the
Internet to store, manage, and process data, rather than a local server
or a personal computer.
What is Fog Computing?
Fog is simply a cloud that is close to the ground, and fog computing is an extension of cloud computing. The term "fog computing" was introduced by Cisco in 2014; it is also called fogging or edge computing. It places compute, storage and networking services between end devices and cloud computing data centers, making these operations easier.
It has a distributed infrastructure in which some application services are handled at the network edge, in a smart device, and some application services are handled in a remote data center in the cloud. Processing takes place in a data hub on a mobile device or in a router. It is simply inefficient to transmit all the data that a bundle of sensors creates to the cloud for processing and analysis.
Cloud computing has become a buzzword in recent years, but it largely depends on servers located at remote sites, resulting in slow response times and scalability issues. Response time and scalability play a crucial role in machine-to-machine communication and services. The edge computing platform solves these problems with a simple idea: locate small servers, called edge servers, in the vicinity of the users and devices, and pass some of the load of the central servers and/or the users’ devices to them.
The goal is to improve efficiency and reduce the amount of data that needs to be transported to the cloud for processing and storage.
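As a minimal sketch of that goal, the Python snippet below summarizes raw sensor readings on an edge node and forwards only a compact summary (plus anomalies) to the cloud. The summarize_at_edge() and send_to_cloud() helpers and the thresholds are assumptions for illustration.

def summarize_at_edge(readings, anomaly_threshold=90.0):
    """readings: list of raw sensor values collected locally.
    Returns a small summary dict plus any readings worth escalating."""
    anomalies = [r for r in readings if r > anomaly_threshold]
    return {
        "count": len(readings),
        "avg": sum(readings) / len(readings) if readings else 0.0,
        "max": max(readings, default=0.0),
        "anomalies": anomalies,
    }

def send_to_cloud(payload):
    print("uploading to cloud:", payload)   # stand-in for a real upload call

# Only one small dict crosses the network, however many raw readings the
# sensors produced locally.
send_to_cloud(summarize_at_edge([20.5, 21.0, 95.2, 20.8]))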
Security Issues:
The main security issue is authentication at the different levels of the gateway, as well as at the smart meters installed in consumers' homes. A user could tamper with their own smart meter to report false readings, or spoof IP addresses.
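As one hedged illustration of how such readings could be authenticated, the sketch below has the meter sign each reading with a shared secret using HMAC, so a tampered value or a spoofed sender fails verification at the gateway. The key handling and message format are simplified assumptions.

import hmac, hashlib

SHARED_KEY = b"per-meter-secret"    # provisioned into the meter and the gateway

def sign_reading(meter_id, kwh):
    msg = f"{meter_id}:{kwh}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def verify_reading(meter_id, kwh, tag):
    return hmac.compare_digest(sign_reading(meter_id, kwh), tag)

tag = sign_reading("meter-42", 12.7)
print(verify_reading("meter-42", 12.7, tag))    # True: genuine reading
print(verify_reading("meter-42", 99.9, tag))    # False: value was tampered with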
Privacy Issues:
In the smart grid, the privacy issue is about hiding details such as what appliance was used at what time, while still allowing a correct summary of the information.
Difference between cloud computing and fog computing:
In fog computing, processing and applications are concentrated in devices at the network edge rather than being transferred to the cloud, so all processing is done on smart devices in the network, not in the cloud.
In mobile cloud computing, mobile devices and cloud computing combine to create a new infrastructure, and data processing and data storage take place outside the mobile devices (in the cloud).
Applications :
- Connected cars: It’s ideal for connected cars, because real-time interactions will make communications between cars, access points and traffic lights as safe and efficient as possible.
- Smart grids: Allows fast machine-to-machine (M2M) handshakes and human-to-machine interactions (HMI), which would work in cooperation with the cloud.
- Smart cities: Fog computing would be able to obtain sensor data on all levels of the activities of cities, and integrate all the mutually independent network entities within.
- Healthcare: The cloud computing market for healthcare is expected to reach $5.4 billion by 2017, according to a Markets and Markets report, and fog computing would allow this on a more localized level.
Labels:
Cloud,
Fog Computing,
Fogging
Tuesday, 25 August 2015
Capacitive Touch Communication
Hi friends, imagine that your smartphone, tablet or iPad could sense your touch and ignore instructions given by an unauthorized user. Wow, what a brilliant idea!
Yes, it is possible with the newly emerging technology of "Capacitive Touch Communication". With this technology the device not only senses the authorized touch, but if a stranger handles it, it signals the user through a wearable device (e.g. a ring, a watch, etc.).
It would be of great benefit for a device to know who is interacting with it and, occasionally, to authenticate the user. This improves the human-computer interface as well as the user experience. Important data on the device, such as e-mails and personal photos, bill payments, and fund transfers between bank accounts, remains safe from strangers. It saves time and makes it easy to find traces of the person who handled the device. It is a bit costly, as it requires a wearable device in addition to the mobile device.
There are four different technologies used to make touch screens:
- Resistive
- Capacitive
- Surface Acoustic Wave(SAW)
- Infrared LED or Optical
A capacitive screen, found in most commercial tablets and smartphones, consists of an array of conducting electrodes behind a transparent, insulating glass layer, and it detects a touch by measuring the additional capacitance of a human body in the circuit.
Fig. 1. Schematic of a basic capacitive touch screen.
Fig. 2. Internal touch detection circuit.
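To illustrate the detection principle, here is a toy Python sketch in which each electrode reports a capacitance value and a touch is declared wherever the reading rises noticeably above its no-touch baseline. The baseline, threshold and grid values are invented for the example.

BASELINE = 100.0          # nominal capacitance reading with no finger present
TOUCH_DELTA = 15.0        # extra capacitance a fingertip typically adds

def detect_touches(grid):
    """grid: 2D list of capacitance readings, one per electrode.
    Returns (row, column) positions whose reading exceeds the threshold."""
    touches = []
    for r, row in enumerate(grid):
        for c, value in enumerate(row):
            if value - BASELINE >= TOUCH_DELTA:
                touches.append((r, c))
    return touches

print(detect_touches([[101, 100, 118],
                      [100, 122, 100]]))   # fingers near two electrodes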
This is a system in which a user device communicates with the touch screen of an electronic device: the electronic circuit of the user device generates a signal by encoding a data sequence that is stored in the user device's memory.
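The sketch below illustrates the signalling idea in Python under simplified assumptions: a wearable token encodes a short identifier as a bit pattern, and the screen side decodes it by thresholding one capacitance sample per bit period. The 8-bit identifier, the threshold and the helper names are hypothetical.

USER_ID = 0b10110010                      # hypothetical 8-bit token identity

def encode_id(user_id, bits=8):
    """Turn the identifier into the on/off pattern the wearable would transmit."""
    return [(user_id >> i) & 1 for i in reversed(range(bits))]

def decode_samples(samples, threshold=0.5):
    """samples: one capacitance sample per bit period, normalised to 0..1.
    Recovers the identifier by thresholding each bit period."""
    value = 0
    for s in samples:
        value = (value << 1) | (1 if s > threshold else 0)
    return value

pattern = encode_id(USER_ID)
received = [0.9 if b else 0.1 for b in pattern]      # idealised channel
print(decode_samples(received) == USER_ID)           # True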
Architecture:
Fig. 3. Overall architecture of capacitive touch communication.
This technology is very friendly, fast, accurate and easy to operate. Since it is innovative, it could further be adopted for computers as well.