
ASSIGNMENT - 4

SUMMER-MAY-(2018)

Q4. List and explain various interaction devices used in developing an interface.
Ans. Several interactive devices are used for human-computer interaction. Some of them are well-known tools, while others have been developed recently or are concepts yet to be realized. This answer discusses some old and new interactive devices.
Touch Screen
 The idea of a touch screen was first described and published by E.A. Johnson
in 1965. In the early 1970s, the first touch screen was developed
by CERN engineers Frank Beck and Bent Stumpe. The physical product was first
created and utilized in 1973. The first resistive touch screen was developed by
George Samuel Hurst in 1975 but wasn't produced and used until 1982.
 A touch screen is a display device that allows the user to interact with a
computer by using their finger or stylus. They can be a useful alternative to a
mouse or keyboard for navigating a GUI (graphical user interface). Touch screens
are used on a variety of devices, such as computer and laptop
displays, smartphones, tablets, cash registers, and information kiosks. Some
touch screens use a grid of infrared beams to sense the presence of a finger
instead of utilizing touch-sensitive input.
Gesture Recognition

 Gesture recognition is an active research field that tries to integrate the gestural channel into Human-Computer Interaction.
 It has applications in:
1. Virtual environment control.
2. Sign language translation.
3. Robot remote control.
4. Musical creation.
 Recognition of human gestures comes within the more general framework of
pattern recognition. In this framework, systems consist of two processes: the
representation and the decision processes. The representation process converts
the raw numerical data into a form adapted to the decision process which then
classifies the data.
 Gesture recognition systems inherit this structure and have two more processes:
the acquisition process, which converts the physical gesture to numerical data,
and the interpretation process, which gives the meaning of the symbol series
coming from the decision process.
Fig. General structure of a gesture recognition system.
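The four processes described above can be sketched as a toy pipeline. All function names and the sample data below are illustrative assumptions, not part of any particular gesture recognition library:

```python
# Minimal sketch of the four-stage gesture recognition pipeline:
# acquisition -> representation -> decision -> interpretation.
# The toy "gesture" data and the symbol names are illustrative assumptions.

def acquire(raw_sensor_frames):
    """Acquisition: convert the physical gesture into numerical data."""
    return [tuple(frame) for frame in raw_sensor_frames]  # e.g. (x, y) samples

def represent(samples):
    """Representation: adapt the raw data for the decision process
    (here, the net displacement of the hand)."""
    (x0, y0), (x1, y1) = samples[0], samples[-1]
    return (x1 - x0, y1 - y0)

def decide(feature):
    """Decision: classify the feature vector into a symbol."""
    dx, dy = feature
    if abs(dx) > abs(dy):
        return "SWIPE_RIGHT" if dx > 0 else "SWIPE_LEFT"
    return "SWIPE_UP" if dy > 0 else "SWIPE_DOWN"

def interpret(symbol):
    """Interpretation: map the symbol series to an application-level meaning."""
    meanings = {"SWIPE_LEFT": "previous page", "SWIPE_RIGHT": "next page",
                "SWIPE_UP": "scroll up", "SWIPE_DOWN": "scroll down"}
    return meanings[symbol]

frames = [(0, 0), (4, 1), (9, 2)]                     # a rightward hand movement
print(interpret(decide(represent(acquire(frames)))))  # -> next page
```

A real system would replace each stage with far heavier machinery (camera capture, feature extraction, a trained classifier), but the division of labor between the stages is the same.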

This technology enables a more advanced association between human and computer, one in which no mechanical devices are used. Gesture-based interaction might eventually replace older devices like keyboards and also reduce the reliance on newer devices like touch screens.

Speech Recognition

Speech recognition can be considered a specific use case of the acoustic channel. The car is a challenging environment in which to deploy speech recognition. A well-developed speech recognition system should cope with the noise coming from the car, the road, and the entertainment system, and include the following characteristics:
 The microphone should be pointed at the driver's position. This ensures that the incoming speech signal is as strong as possible.
 Push-to-talk button. Speech interaction is initiated by the driver by pushing a button. Nevertheless, other occupants, or voices from the entertainment devices, can still initiate other actions with the proper command.
 The entertainment system should be muted in order to avoid interference with the speech recognition system.
The car is a noisy environment, even when applying these basic techniques. Different
driving speeds, varying road conditions, wipers, and air-conditioning are examples of
noise sources. For this reason, the speech recognition system should include a noise
cancellation algorithm.
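Two of the techniques above, push-to-talk gating and noise rejection, can be sketched with a simple energy threshold. The threshold factor, the command set, and the frame layout are illustrative assumptions, not a real automotive speech stack:

```python
# Hedged sketch: accept a spoken command only when the driver has pressed
# the push-to-talk button AND the audio frame's energy clearly exceeds an
# estimated noise floor (a crude stand-in for noise cancellation).
import math

def rms(samples):
    """Root-mean-square energy of an audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def recognize(frame, button_pressed, noise_floor, commands=("CALL", "NAVIGATE")):
    """Return the recognized command, or None when the frame is rejected."""
    if not button_pressed:
        return None                          # interaction must be driver-initiated
    if rms(frame["audio"]) < 2.0 * noise_floor:
        return None                          # likely road / wiper / AC noise
    return frame["label"] if frame["label"] in commands else None

quiet = {"audio": [0.01, -0.02, 0.015], "label": "CALL"}
loud  = {"audio": [0.8, -0.9, 0.85], "label": "CALL"}
print(recognize(loud, button_pressed=True, noise_floor=0.05))   # -> CALL
print(recognize(quiet, button_pressed=True, noise_floor=0.05))  # -> None
print(recognize(loud, button_pressed=False, noise_floor=0.05))  # -> None
```

A production system would instead run an adaptive noise cancellation algorithm over the signal, but the gating logic illustrates why the push-to-talk and mute requirements exist.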
Nowadays, the voice interface is mainly limited to the interpretation of a set of specific commands. The current challenge is to go a step further toward the use of natural language. Some such devices have appeared, although they are not yet widely adopted.

Keyboard
A keyboard can be considered a primitive device familiar to all of us today. A keyboard uses an arrangement of keys or buttons that serves as a mechanical input device for a computer. Each key on a keyboard corresponds to a single written symbol or character.
This is the oldest and most established interactive device between human and machine. It has inspired the development of many other interactive devices and has itself advanced, for example into on-screen (soft) keyboards for computers and mobile phones.
Response Time
Response time is the time taken by a device to respond to a request. The request can be
anything from a database query to loading a web page. The response time is the sum of
the service time and wait time. Transmission time becomes a part of the response time
when the response has to travel over a network.
In modern HCI devices, several applications are installed, and most of them run simultaneously or on demand. This increases the response time, and the increase is caused mainly by growing wait time: requests must queue behind those already running. It is therefore important that a device's response time stays low, which is why modern devices use advanced processors.
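The decomposition described above can be written as a small helper. The numbers are illustrative; transmission time is added only when the response travels over a network:

```python
# Response time = service time + wait time (+ transmission time when the
# response travels over a network). All timings below are made-up examples.

def response_time(service_ms, wait_ms, transmission_ms=0.0, over_network=False):
    """Total response time in milliseconds."""
    total = service_ms + wait_ms
    if over_network:
        total += transmission_ms
    return total

# A local database query: 12 ms of work after 30 ms in the queue.
print(response_time(12, 30))                                        # -> 42
# The same query answered over a network adds transmission time.
print(response_time(12, 30, transmission_ms=8, over_network=True))  # -> 50
```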

(OR)
Q. Describe in detail components of navigation system.
Ans. A satellite navigation device, colloquially called a GPS receiver, or simply a GPS, is a device that is capable of receiving information from GNSS satellites and then calculating the device's geographical position. Using suitable software, the device may
display the position on a map, and it may offer routing directions. The Global
Positioning System (GPS) is one of a handful of global navigation satellite
systems (GNSS) made up of a network of a minimum of 24, but currently 30,
satellites placed into orbit by the U.S. Department of Defense.
GPS was originally developed for use by the United States military, but in the 1980s, the
United States government allowed the system to be used for civilian purposes. Though
the GPS satellite data is free and works anywhere in the world, the GPS device and the
associated software must be bought or rented.
A satellite navigation device can retrieve (from one or more satellite systems) location
and time information in all weather conditions, anywhere on or near the Earth. GPS
reception requires an unobstructed line of sight to four or more GPS satellites, and is
subject to poor satellite signal conditions. In exceptionally poor signal conditions, for
example in urban areas, satellite signals may exhibit multipath propagation where
signals bounce off structures, or are weakened by meteorological conditions. Obstructed
lines of sight may arise from a tree canopy or inside a structure, such as in a building,
garage or tunnel. Today, most standalone GPS receivers are used in automobiles. The
GPS capability of smartphones may use assisted GPS (A-GPS) technology, which can use
the base station or cell towers to provide a faster Time to First Fix (TTFF), especially
when GPS signals are poor or unavailable. However, the mobile network part of the A-
GPS technology would not be available when the smartphone is outside the range of the
mobile reception network, while the GPS aspect would otherwise continue to be
available.
The Russian Global Navigation Satellite System (GLONASS) was developed
contemporaneously with GPS, but suffered from incomplete coverage of the globe until
the mid-2000s. GLONASS can be added to GPS devices, making more satellites available and enabling positions to be fixed more quickly and accurately, to within 2 meters.
Other satellite navigation services with (intended) global coverage are the
European Galileo and the Chinese BeiDou.
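The core idea of fixing a position from satellite ranges can be sketched in two dimensions with exact distances. This is only a toy illustration: a real receiver works in three dimensions, must also solve for its own clock bias, and therefore needs a line of sight to at least four satellites, as noted above. All coordinates here are made-up:

```python
# Toy 2-D trilateration: given three known anchor positions and exact
# distances to each, recover the receiver position by subtracting one
# circle equation from the others, which yields a linear system.
import math

def trilaterate_2d(anchors, dists):
    """Solve for (x, y) from three (x_i, y_i) anchors and distances d_i."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Subtracting the third circle equation from the first two gives
    # A @ [x, y] = b, solved here with Cramer's rule.
    a11, a12 = 2 * (x3 - x1), 2 * (y3 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    b1 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    b2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(a, true_pos) for a in anchors]
print(trilaterate_2d(anchors, dists))   # -> approximately (3.0, 4.0)
```

Real receivers measure pseudoranges (ranges corrupted by the receiver clock error) and solve the resulting nonlinear system iteratively, which is where the fourth satellite comes in.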
WINTER-DEC-2018

Q.4 Describe in detail direct control pointing devices and indirect control pointing devices.
Ans. Indirect and direct input refers to how data or commands are entered into a
system. Indirect devices translate some action of the human body into data. Examples
include a computer mouse, a rotary encoder (containing a knob for movement and a
button for activation), or a joystick. Although these devices have different physical
attributes they share the cognitive commonality of mental translation between the
human body and the machine. For example, moving a mouse forward moves a
cursor upward on a screen. The spatial translation required has been shown to be
cognitively demanding, particularly for older adults experiencing normal age-related
decline in spatial ability. Mental translation is also involved in the amount of gain
offered by an indirect device; a small movement with a device may produce a large
movement on a screen and vice versa. The user must translate the physical distance
moved to the virtual distance moved and such translation affects performance and
perhaps attentional requirements. Yet it is because of this translation that indirect
devices can offer great precision for on-screen tasks.
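The translation and gain described above amount to a simple mapping from device motion to cursor motion. The gain value here is an illustrative assumption; real pointer drivers typically apply a velocity-dependent transfer function:

```python
# Indirect pointing: physical device motion is translated into on-screen
# cursor motion through a gain factor (the control-display ratio), and
# forward device motion maps to upward screen motion.

def cursor_delta(device_dx, device_dy_forward, gain=2.5):
    """Map a mouse displacement (mm) to a cursor displacement (px).
    Screen y grows downward, so forward motion becomes negative screen y."""
    return (device_dx * gain, -device_dy_forward * gain)

# A small hand movement produces a larger on-screen movement ...
print(cursor_delta(4, 10))            # -> (10.0, -25.0)
# ... while a gain below 1 trades speed for precision.
print(cursor_delta(4, 10, gain=0.5))  # -> (2.0, -5.0)
```

The gain is exactly the mental translation the text mentions: the user must relate the physical distance moved to the (scaled, axis-flipped) virtual distance.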
Direct devices have no intermediary; the movement of the body equals the input to the
machine. Examples of direct devices are touch screens, light pens, and voice recognition
systems. Direct devices do not require conscious mental translation; the movement
effort matches the display distance and performance may be predicted by Fitts’ Law
type functions. For older users, the directness of operation can result in faster
acquisition, operation, and accuracy with the interface. Other benefits include the
option for ballistic movement. Direct devices do not necessarily produce unilaterally
better performance; they can cause performance difficulties for some input tasks due to
fatigue, accidental activation, or a lack of precision.
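The Fitts'-Law-type prediction mentioned above is commonly written in the Shannon formulation, MT = a + b * log2(D/W + 1). The constants a and b below are illustrative placeholders; in practice they are fitted per device and per user group:

```python
# Fitts' Law (Shannon formulation): predicted movement time grows with the
# index of difficulty log2(D/W + 1). The constants a and b are assumed
# example values, normally obtained by regression for a given device.
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds for a pointing task, where
    `distance` is the distance to the target and `width` its size."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A far, small target takes longer to acquire than a near, large one.
print(fitts_movement_time(distance=300, width=20))  # harder task, ~0.7 s
print(fitts_movement_time(distance=60, width=40))   # easier task, ~0.3 s
```

For direct devices the movement effort matches the display distance, which is why their performance tends to follow such functions closely.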
One might conclude that indirect devices should be more attention-demanding than direct devices due to the translation required. Indeed, it has been implied that direct devices may “involve less cognitive processing than the actions required with the keyboard and mouse”. Differential cognitive demands were also implicated in a study that found varied performance on a digit-span test across input device types. However, the findings were mixed, and attention was not systematically varied across the tasks.
Thus there is conjecture and limited evidence that indirect devices are more attention
demanding than direct devices. However, attention demands have not been
systematically investigated to determine qualitative and quantitative performance
changes as attention is withdrawn from the task. Moreover, this issue has not been
addressed in the context of other relevant variables such as the task demands or the age
of the user.
(OR)
Q. Describe in detail the importance of specification methods in
selection of interface design tools.
Ans. A user interface specification (UI specification) is a document that captures the
details of the software user interface into a written document. The specification covers
all possible actions that an end user may perform and all visual, auditory and other
interaction elements.
The UI specification is the main source of implementation information for how the
software should work. Beyond implementation, a UI specification should consider
usability, localization, and demo limits. As future designers might continue or build on
top of existing work, a UI specification should consider forward
compatibility constraints in order to assist the implementation team.
The UI specification can be regarded as the document that bridges the gap between the
product management functions and implementation. One of the main purposes of a UI
specification is to process the product requirements into a more detailed format. The
level of detail and document type vary depending on the needs and design practices of the organization. Small-scale prototypes might require only modest documentation with high-level details.
In general, the goal of a requirements specification is to describe what a product is capable of, whereas the UI specification details how these requirements are implemented in practice.
Having a formal structure for a UI specification helps readers anticipate where they can find the information needed to interpret the specification correctly. An example structure of a UI specification may contain, but is not limited to, the following items:
 Change history
 Open issues
 Logical flow
 Display descriptions
 Error and exception cases
Before the UI specification is created, much work has already been done to define the application and its desired functionality. Usually there are requirements for the software, which form the basis for use case creation and prioritization. A UI specification is only as good as the process by which it was created, so let's consider the steps in that process:
 Use case definition
Use cases are short stories that explain how the end user starts and completes a specific task, without describing how to implement it. They are then used as the basis for drafting the UI concept, which can contain, for example, the main views of the software, some textual explanations of the views, and logical flows.
The purpose of writing use cases is to enhance the UI designer's understanding of the features that the product must have and of the actions that take place when the user interacts with the product.
 Design draft creation
The UI design draft is done on the basis of the use case analysis. The purpose of
the UI design draft is to show the design proposed, and to explain how the user
interface enables the user to complete the main use cases, without going into
details.
It should be as visual as possible, and all the material created should be in a format that can be reused in the final UI specification. (This is a good time to conduct usability testing or expert evaluations and make changes.)
 Writing the user interface specification.
The UI specification is then written to describe the UI concept. The UI
specification can be seen as an extension of the design draft that provides a
complete description that contains all details, exceptions, error cases,
notifications, and so forth. The amount of detail provided depends on the needs
and characteristics of the development organization (scope of the product,
culture of the organization, and development methodology used, among others).
Usually, the UI concept and specifications are reviewed by the stakeholders to
ensure that all necessary details are in place.
