Multimedia Systems Lab Manual
Wondim D
School of Computing, Bahir Dar Institute of
Technology-BDU
3/9/2015
BAHIR DAR UNIVERSITY
BAHIR DAR INSTITUTE OF TECHNOLOGY
School of Computing
I hereby certify and approve that the Multimedia Systems laboratory manual prepared by Wondim Dessiye, which I have reviewed and commented on, carries out the objectives of the course, completes the practical part of the course, and fulfills the standard of the school.
Contents
1. Digital Images
   1.1. Introduction
   1.2. Reading, Writing and Displaying Digital Images
      1.2.1. Reading Digital Images
      1.2.2. Drawing/Displaying Digital Images
      1.2.3. Creating Digital Image
      1.2.4. Writing Digital Images
      1.2.5. Experiment 1: Reading, Displaying, and Writing Digital Images
   1.3. Manipulating Pixels
      1.3.1. Experiment 2: Manipulating Pixels (I)
      1.3.2. Experiment 3: Manipulating Pixels (II)
   1.4. Creating your own Image Format
      1.4.1. Bahir Dar Pictures (BDP)
      1.4.2. Experiment 4: Creating Image Formats
2. Color Models in Image and Video
   2.1. Introduction
   2.2. Experiment 5: Working on Color Models
3. Audio
   3.1. Introduction
   3.2. Reading and Writing Sound Files
   3.3. Converting Audio Data Formats
   3.4. Experiment 6: Reading, writing and converting Audio Data
   3.5. Playing Back and Recording Audio Data
   3.6. Experiment 7: Playing and Recording Audio Data
4. Video
   4.1. Introduction
   4.2. Experiment 8: Playing a Movie Using JMF
   4.3. Experiment 9: Capturing Video from Webcam
5. Image Compression
   5.1. Introduction
   5.2. Experiment 10: Lossless Compression Techniques
   5.3. Experiment 11: Lossy Compression Techniques
6. Animation
   6.1. Introduction to Macromedia Flash
   6.2. Using the Drawing Tools
   6.3. Working with Layers
   6.4. Working with the Timeline
   6.5. Creating Animations
   6.6. Publishing and Exporting
   6.7. Experiment 12: Animation Basics (I)
   6.8. Experiment 13: Animation Basics (II)
1. Digital Images
1.1. Introduction
An image is typically a rectangular two-dimensional array of pixels, where each pixel represents
the color at that position of the image and where the dimensions represent the horizontal extent
(width) and vertical extent (height) of the image as it is displayed.
1.2. Reading, Writing and Displaying Digital Images
1.2.1. Reading Digital Images
The main classes you must learn about to work with images are BufferedImage, ImageIO, and Graphics/Graphics2D.
External image formats are loaded into the BufferedImage format using the javax.imageio.ImageIO class. The ImageIO class has built-in support for GIF, PNG, JPEG, BMP, and WBMP.
The following code shows how to load an image from a specific file:
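A minimal sketch of such a loader (the class name LoadImage and any file name passed to it are placeholder choices, not part of the original listing):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class LoadImage {
    // Reads an image file; ImageIO inspects the file contents, picks a
    // suitable decoder (JPEG, PNG, GIF, BMP or WBMP) and returns the
    // decoded pixels as a BufferedImage.
    public static BufferedImage load(String fileName) throws IOException {
        return ImageIO.read(new File(fileName));
    }
}
```

For example, BufferedImage bimg = LoadImage.load("myImage.jpg"); loads a JPEG into a BufferedImage that Java 2D can use directly.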
Note: Image I/O recognizes the contents of the file as a JPEG format image, and decodes it into a
BufferedImage which can be directly used by Java 2D.
1.2.2. Drawing/Displaying Digital Images
An image can be drawn using methods of the Graphics or Graphics2D classes. An instance of Graphics or Graphics2D is known as a graphics context. It represents a surface onto which we can draw images, text or other graphics primitives.
A graphics context could be associated with an output device such as a printer, or it could be derived from another image (allowing us to draw images inside other images); however, it is typically associated with a GUI component that is to be displayed on the screen.
For example, to display an image using the Abstract Window Toolkit (AWT), we must extend an existing AWT component and override its paint() method. In very simple applets or applications, extending Applet or Frame would be sufficient.
The general syntax for calling the drawImage method of class Graphics is:
    boolean drawImage(Image img, int x, int y, ImageObserver observer)
where x and y specify the position of the top-left corner of the image. The observer parameter notifies the application of updates to an image that is loaded asynchronously. The observer parameter is not needed for the BufferedImage class, so it is usually null.
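Since a graphics context can also be derived from another image, drawImage can be demonstrated without a GUI by drawing one BufferedImage into another (the helper class DrawDemo is our own illustration, not from the original manual):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class DrawDemo {
    // Draws src onto dst with its top-left corner at (x, y).
    // The observer argument is null because a BufferedImage is
    // already fully loaded (no asynchronous updates to report).
    public static void drawAt(BufferedImage dst, BufferedImage src, int x, int y) {
        Graphics2D g = dst.createGraphics(); // graphics context derived from an image
        g.drawImage(src, x, y, null);
        g.dispose();                         // release the context when done
    }
}
```

The same drawImage call works unchanged inside an overridden paint(Graphics g) method of an AWT component.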
1.2.3. Creating Digital Image
We already know how to load an existing image that was created and stored on your system or at a network location. But you may also want to create a new image as a pixel data buffer.
You can create a BufferedImage object manually, using one of the three constructors of this
class as follows:
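The most commonly used of these constructors takes a width, a height, and a predefined image type. A sketch (the helper class CreateImage is ours):

```java
import java.awt.image.BufferedImage;

public class CreateImage {
    // Creates a blank 24-bit RGB image; every pixel starts out black (0, 0, 0).
    // Other predefined types include TYPE_BYTE_GRAY and TYPE_3BYTE_BGR.
    public static BufferedImage create(int width, int height) {
        return new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    }
}
```

The other constructors let you supply an IndexColorModel, or a ColorModel together with a WritableRaster, for full control over the pixel layout.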
1.2.4. Writing Digital Images
The ImageIO class provides a simple way to save images in a variety of image formats, as shown below. (Note: The BufferedImage class implements the RenderedImage interface.)
try {
    BufferedImage bimg = getMyImage();
    File outputfile = new File("savedImage.png");
    ImageIO.write(bimg, "png", outputfile);
} catch (IOException e) {
    ...
}
1.2.5. Experiment 1: Reading, Displaying, and Writing Digital Images
1. Write a program that loads an external JPEG image into a BufferedImage bimg.
2. Modify your program so that it displays the width, height and type of the image. Use the getWidth(), getHeight(), and getType() methods of BufferedImage.
3. Write a method that displays the image in Q(1) on a frame.
4. Modify your program so that it displays two different images side by side.
5. Write a method that writes the image in Q(1) in PNG format.
1.3. Manipulating Pixels
Note that Raster is a read-only class; its methods can be used to inspect pixel values but not to
modify them. A subclass of Raster, called WritableRaster, adds methods that change a pixel's
value. The basic methods provided by WritableRaster to modify pixel values are given below:
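For illustration, setSample writes one band (the gray band, or one of R, G, B) of a single pixel (the helper class PixelDemo is ours):

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class PixelDemo {
    // Sets band 'band' of the pixel at (x, y) to 'value' using the
    // image's WritableRaster, which is a writable view of the pixel data.
    public static void setSample(BufferedImage img, int x, int y, int band, int value) {
        WritableRaster raster = img.getRaster();
        raster.setSample(x, y, band, value);
    }
}
```

WritableRaster also provides setPixel and setPixels for writing all bands of one or more pixels at once.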
1.3.1. Experiment 2: Manipulating Pixels (I)
1. Load a true color external image called “myImage.jpg” into a buffered image.
2. Display the RGB values of the first row of the image, from column 1 to 10.
3. Replace the first and last two rows of the image with red.
4. Invert the image data (upside-down) and store the inverted data in a new buffered image called bimg2.
5. Display both the original and inverted images and observe their differences.
6. Create a method that takes two buffered images as arguments and returns the average of the two.
7. Write a method that takes a buffered image as an argument, converts it to grayscale, and returns the converted image.
1.3.2. Experiment 3: Manipulating Pixels (II)
1. Write a program that reads a color image from a JPEG file into a BufferedImage object and then counts the number of pixels with a color similar to some reference color. This reference color should be specified as red, green and blue values from the user interface. 'Similar' in this case means that the distance between a color and the reference color in RGB space is less than 10. What happens when you attempt to run the program on a grayscale image?
1.4. Creating your own Image Format
1.4.1. Bahir Dar Pictures (BDP)
In this section we will create a new image format called “BDP”, which stands for Bahir Dar Pictures. The BDP format supports 8-bit grayscale and 24-bit RGB color images, which may or may not be compressed using a lossless compression technique.
A BDP file begins with a 12-byte header. The first four bytes are the signature, which indicates the image type and the compression status used in the image [Table 1.1]. This is followed by a pair of 32-bit integers representing the width and height of the image, respectively. All remaining bytes in the file are compressed or uncompressed image data. The design of the encoder and decoder of the BDP format is shown in Figure 1.1.
BDPEncoder
- DataOutputStream output
- boolean compression
+ BDPEncoder()
+ BDPEncoder(String fileName)
+ void encode(BufferedImage img)
+ void enableCompression()
+ void disableCompression()

BDPDecoder
- DataInputStream input
- byte[] signature
- int type
- int width
- int height
+ BDPDecoder()
+ BDPDecoder(String fileName)
+ BufferedImage decode()
+ int getType()
+ int getWidth()
+ int getHeight()
Figure 1.1: UML diagrams showing the design of BDPEncoder and BDPDecoder classes.
Example: Implementing the BDPEncoder class
public class BDPEncoder {
    private DataOutputStream output;
    private boolean compression;

    public BDPEncoder() { }

    public BDPEncoder(String fileName) throws IOException {
        output = new DataOutputStream(new FileOutputStream(fileName));
    }

    public void enableCompression() {
        compression = true;
    }

    public void disableCompression() {
        compression = false;
    }

    public void encode(BufferedImage img) throws IOException {
        writeHeader(img);
        if (img.getType() == BufferedImage.TYPE_BYTE_GRAY ||
            img.getType() == BufferedImage.TYPE_3BYTE_BGR) {
            DataBufferByte db = (DataBufferByte) img.getRaster().getDataBuffer();
            byte[] data = db.getData();
            if (compression) {
                // will be implemented in its own topic (Compression)
            } else {
                output.write(data);
                output.flush();
            }
        } else {
            System.err.println("Unsupported file format");
        }
    }

    private void writeHeader(BufferedImage img) throws IOException {
        if (img.getType() == BufferedImage.TYPE_BYTE_GRAY) {
            if (compression)
                output.write("gIMG".getBytes());
            else
                output.write("GIMG".getBytes());
        } else {
            if (compression)
                output.write("cIMG".getBytes());
            else
                output.write("CIMG".getBytes());
        }
        output.writeInt(img.getWidth());
        output.writeInt(img.getHeight());
        output.flush();
    }
}
1.4.2. Experiment 4: Creating Image Formats
1. Implement the BDPDecoder class based on the UML given in Figure 1.1.
2. Color Models in Image and Video
2.1. Introduction
There are different color models used in image and video. The best-known color models are RGB, CMY, HSV, YIQ, and YCbCr.
2.2. Experiment 5: Working on Color Models
1. Write a program to display colors based on inputs of the underlying color model parameters (for example, RGB).
2. Write a method for each of the following operations:
   2.1. convert RGB to CMYK and vice versa
   2.2. convert RGB to YCbCr
   2.3. convert HSV to RGB
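As a starting point for operation 2.2, the sketch below converts an RGB triple to YCbCr using the full-range BT.601 equations commonly used for JPEG images (the class name ColorConvert is ours):

```java
public class ColorConvert {
    // Converts an RGB triple (each component 0-255) to YCbCr.
    // Y is luminance; Cb and Cr are chrominance, offset by 128 so
    // that a neutral gray maps to (gray, 128, 128).
    public static int[] rgbToYCbCr(int r, int g, int b) {
        int y  = (int) Math.round( 0.299    * r + 0.587    * g + 0.114    * b);
        int cb = (int) Math.round(-0.168736 * r - 0.331264 * g + 0.5      * b + 128);
        int cr = (int) Math.round( 0.5      * r - 0.418688 * g - 0.081312 * b + 128);
        return new int[] { y, cb, cr };
    }
}
```

For example, pure white (255, 255, 255) maps to Y = 255, Cb = 128, Cr = 128.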
3. Audio
3.1. Introduction
This section introduces the basic concepts of the Java Sound API and applies the API in hands-on examples that enable you to play a background music file (such as an MP3), record your singing from the microphone as an audio file, convert audio data from one audio format or audio file format to another, and manipulate audio samples by applying different effects such as changing the volume, gain, or sample rate.
A JavaSound Primer
The Java Sound API supports two types of audio: sampled and MIDI (MIDI will not be covered in this experiment). It is a low-level API for manipulating audio playback, audio recording, and MIDI music synthesizers: low level because you have direct access to the bits that represent the audio data, and you can directly control many features of the underlying sound hardware.
The main classes related to sound files and sound data are given below
• Mixer: In Java Sound API a Mixer object represents either a hardware or a software
device. A mixer object can be used for input (capturing audio) or output (playing back
audio).
• In the case of input, the source from which the mixer gets audio for mixing is one or more input ports. The mixer sends the captured and mixed audio streams to its target, which is an object with a buffer from which an application program can retrieve the mixed audio data.
• In the case of audio output, the situation is reversed. The mixer's source for audio
is one or more objects containing buffers into which one or more application
programs write their sound data; and the mixer's target is one or more output
ports.
• Ports are simple lines for input or output of audio to or from audio devices. Common
types of ports are: microphone, line input, CD-ROM drive, speaker, headphone, and line
output.
• TargetDataLine receives audio data from a mixer. It provides methods for reading the
data from the target data line's buffer and determining how much data is currently
available for reading.
• SourceDataLine receives audio data for play back. It provides methods for writing data
to the source data line's buffer for playback, and determining how much data the line is
prepared to receive without blocking.
• Clip is a data line into which audio data can be loaded prior to playback.
3.2. Reading and Writing Sound Files
The AudioSystem class provides two types of file-reading services through the methods getAudioFileFormat(InputStream/File/URL) and getAudioInputStream(InputStream/File/URL).
The following method, in class AudioSystem, creates a disk file of a specified file type:
write(AudioInputStream, AudioFileFormat.Type, File)
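A sketch of using this method to save a stream as a WAV file (the wrapper class WriteAudio is ours):

```java
import java.io.File;
import java.io.IOException;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class WriteAudio {
    // Writes the given audio stream to 'file' in WAV format and
    // returns the number of bytes written (header plus sample data).
    public static int saveAsWav(AudioInputStream in, File file) throws IOException {
        return AudioSystem.write(in, AudioFileFormat.Type.WAVE, file);
    }
}
```

Other built-in targets include AudioFileFormat.Type.AIFF and AudioFileFormat.Type.AU.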
Example: The program below reads an audio file format and audio data from a given audio file.
import javax.sound.sampled.*;
import java.io.*;

public class AudioExample {
    AudioInputStream audioIn;
    AudioFileFormat fileFormat;

    public void read(File file) {
        try {
            fileFormat = AudioSystem.getAudioFileFormat(file);
            audioIn = AudioSystem.getAudioInputStream(file);
        } catch (Exception ex) {
            System.err.println(ex.getMessage());
        }
    }
}
3.3. Converting Audio Data Formats
To create a specific AudioFormat, we can use one of the two constructors of the AudioFormat class:
    AudioFormat(float sampleRate, int sampleSizeInBits, int channels, boolean signed, boolean bigEndian)
    AudioFormat(AudioFormat.Encoding encoding, float sampleRate, int sampleSizeInBits, int channels, int frameSize, float frameRate, boolean bigEndian)
Example: A method that converts the data format of a given audio data
AudioInputStream lowResAIS;

public void convert() {
    AudioFormat format = new AudioFormat(8000.0f, 16, 1, true, false);
    lowResAIS = AudioSystem.getAudioInputStream(format, audioIn);
}
3.4. Experiment 6: Reading, writing and converting Audio Data
2. Write a method that creates a new audio file of type “AIFF” from the input audio file.
3. Write a method that converts the audio file in Q2 to a lower resolution.
3.5. Playing Back and Recording Audio Data
There are two kinds of line you can use for playing sound: a Clip and a SourceDataLine. Use a Clip when you have non-real-time sound data that can be preloaded into memory. Use a SourceDataLine for streaming data, such as a long sound file that won't all fit in memory at once, or a sound whose data can't be known in advance of playback.
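A sketch of the Clip pattern described above, split into a preload step and a playback step (the class ClipPlayer is ours; the playback step needs an audio device, so it may fail on a machine without one):

```java
import java.io.File;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class ClipPlayer {
    // Step 1: read the whole sound file as a stream (the data a Clip preloads).
    public static AudioInputStream load(File soundFile) throws Exception {
        return AudioSystem.getAudioInputStream(soundFile);
    }

    // Step 2: preload the stream into a Clip and start playback.
    // start() returns immediately; playback continues in the background.
    public static Clip play(AudioInputStream in) throws Exception {
        Clip clip = AudioSystem.getClip();
        clip.open(in);   // loads all audio data into memory before playing
        clip.start();
        return clip;
    }
}
```

Because open() reads the entire stream first, this pattern is only appropriate for short sounds, exactly as described above.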
The following fragment captures audio from a TargetDataLine into a byte buffer; formatIn is the AudioFormat used for capturing.

TargetDataLine targetLine;
try {
    targetLine = AudioSystem.getTargetDataLine(formatIn);
    targetLine.open(formatIn);
    targetLine.start();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    int numRead = 0;
    byte[] buff = new byte[40];
    while ((numRead = targetLine.read(buff, 0, buff.length)) > 0) {
        out.write(buff, 0, numRead);
    }
} catch (Exception ex) {
    System.err.println(ex.getMessage());
}
4. Video
4.1. Introduction
Java Media Framework (JMF) is a framework for handling streaming media in Java programs.
JMF is an optional package of Java 2 standard platform. JMF provides a unified architecture and
messaging protocol for managing the acquisition, processing and delivery of time-based media.
Representing media
Multimedia content is almost always stored in a compressed form using one of various standard formats. Each format essentially defines the method used to encode the media. Therefore we need a class to describe the format of the multimedia content we are handling.
To this end JMF defines the class Format that specifies the common attributes of the media
Format. The class Format is further specialized into the classes AudioFormat and
VideoFormat.
The next most important facility an API should offer is the ability to specify the media data source. Using a URL object we can specify the media source for a file. JMF provides another class, called MediaLocator, to locate a media source on a hardware device such as a microphone or webcam. The source of the media can be of varying nature. The JMF class DataSource abstracts a source of media and offers a simple connect protocol to access the media data.
A DataSink abstracts the location of the media destination and provides a simple protocol for rendering media to a destination. A DataSink can read media from a DataSource and render it to a file or a stream.
Important components
Player: A Player takes as input a stream of audio or video data and renders it to a speaker or a screen, much like a CD player reads a CD and outputs music to the speaker. A Player has states, which exist naturally because a Player has to prepare itself and its data source before it can start playing the media. A Player has many methods, such as getVisualComponent(), getControlPanelComponent(), start(), stop(), and deallocate().
Processor: A Processor is a type of Player. In the JMF API, a Processor interface extends
Player. As such, a Processor supports the same presentation controls as a Player. Unlike a Player,
a Processor has control over what processing is performed on the input media stream.
In addition to rendering a data source, a Processor can also output media data through a
DataSource so it can be presented by another Player or Processor.
Manager: Manager class is used to create players, processors, datasinks and so on. You can
imagine it as a mapper between JMF components.
If you are getting your media from a file, use a URL.
If you are getting your media from a hardware device, for example a microphone or a webcam, then use a MediaLocator.
After you choose one of these options, you extract your DataSource from it and use it in the creation of either a Player or a Processor.
If you only want to display your data, you can use a Player.
If you want to make changes to the data before displaying it, or if you want to send it over a network or save it to a file, then you have to use a Processor.
4.2. Experiment 8: Playing a Movie Using JMF

Manager.setHint(Manager.LIGHTWEIGHT_RENDERER, true);
// creating a realized player; url is assumed to be a java.net.URL
// pointing at the movie file (the original setup lines were not kept)
Player player = Manager.createRealizedPlayer(url);
// constructing a frame to display our player
JFrame f = new JFrame("Movie Player");
f.setLayout(new BorderLayout());
f.add(player.getVisualComponent(), BorderLayout.CENTER);
f.setSize(400, 400);
f.setLocationRelativeTo(null);
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
f.setVisible(true);
player.start();
4.3. Experiment 9: Capturing Video from Webcam

Player player;
// creating a player; webcamMediaLocator is assumed to be a MediaLocator
// identifying the capture device
player = Manager.createRealizedPlayer(webcamMediaLocator);
Component comp; // for getting the visual player component of the camera
if ((comp = player.getVisualComponent()) != null) {
    f.setLayout(new BorderLayout());
    f.add(comp, BorderLayout.CENTER);
    f.add(player.getControlPanelComponent(), BorderLayout.SOUTH);
    f.setSize(400, 400);
    f.setLocationRelativeTo(null);
    f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    f.setVisible(true);
    player.start();
}
5. Image Compression
5.1. Introduction
Compression is the process of coding that will effectively reduce the total number of bits needed
to represent certain information. Generally we can classify compression methods into two:
lossless and lossy compressions.
Lossless compression: data compressed by this method is digitally identical to the original data when decoded. It achieves only a modest amount of compression. It is used for applications that do not tolerate errors or losses, such as legal and medical documents and computer programs. Some lossless compression methods are Run-Length Coding, Huffman Coding, Dictionary-Based Coding, Arithmetic Coding, etc.
Lossy compression: discards components of the signal that are known to be redundant (including psycho-visual redundancy), so the output signal differs from the input. It achieves much higher compression, and under normal viewing conditions no visible loss is perceived (visually lossless). It is used for applications in which some errors or losses are tolerated. Some lossy compression methods are block transform coding such as Discrete Cosine Transform coding, Discrete Wavelet Transform coding, lossy predictive coding, etc.
5.2. Experiment 10: Lossless Compression Techniques
1. Write a method called “encode” that compresses a sequence of characters based on Run-Length Coding. Run-length coding is a very widely used and simple compression technique: runs of symbols are replaced with (run-length, symbol) pairs.
// 'original' is an assumed example input; the declarations below were
// part of the listing that was not kept.
String original = "aaabbbbcc";
char[] c = original.toCharArray();
String compressed = "";
int index = 0;
int run = 0;
char symbol = c[index];
char nextSymbol = symbol;
while (index < c.length) {
    do {
        nextSymbol = c[index];
        if (nextSymbol == symbol) {
            run++;
        } else {
            break;
        }
        index++;
    } while (index < c.length);
    compressed += run;
    compressed += symbol;
    symbol = nextSymbol;
    run = 0;
}
System.out.println("Original=" + original);
System.out.println("Compressed=" + compressed);
// The same idea applied to the raster bytes of a grayscale image:
// 'buf' is assumed to hold the image data, 'outStrm' is a
// ByteArrayOutputStream, and 'img'/'raster' refer to the source image.
// Note that write(run) stores only the low 8 bits, so runs longer
// than 255 would overflow.
int index = 0;
int run = 0;
byte symbol = buf[index];
byte nextSymbol;
while (index < buf.length) {
    do {
        nextSymbol = buf[index];
        if (nextSymbol == symbol) {
            run++;
        } else {
            break;
        }
        index++;
    } while (index < buf.length);
    outStrm.write(run);
    outStrm.write(symbol);
    symbol = nextSymbol;
    run = 0;
}
for (int i = 0; i < img.getWidth(); i++)
    for (int j = 0; j < img.getHeight(); j++)
        System.out.println("Original=" + raster.getSample(i, j, 0));
byte b[] = outStrm.toByteArray();
for (int i = 0; i < b.length; i++)
    System.out.println("Compressed=" + b[i]);
5.3. Experiment 11: Lossy Compression Techniques
In this experiment we will be using our own lossy compression technique called “RowJumper”. As shown in Figure 5.1, the RowJumper compression technique drops every second row of an image. During reconstruction, the technique substitutes each missing row by averaging the two immediately adjacent rows: the row above the dropped row and the row below it.
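The reconstruction step described above can be sketched as an element-wise average of the two neighboring rows (the class RowJumper is ours; rows are given as arrays of gray-level samples):

```java
public class RowJumper {
    // Reconstructs a dropped row as the element-wise average of the row
    // above it and the row below it, per the RowJumper decompression rule.
    public static int[] averageRows(int[] above, int[] below) {
        int[] result = new int[above.length];
        for (int i = 0; i < above.length; i++) {
            result[i] = (above[i] + below[i]) / 2;
        }
        return result;
    }
}
```

For example, averaging the rows {0, 100} and {100, 200} reconstructs the missing row as {50, 150}.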
Exercises:
1. Write a method called “encoder” that performs the above compression technique on an
input gray level image.
2. Write a method called “decoder” that takes an image compressed by the above technique
and returns a reconstructed image based on the above de-compression technique
3. Display both the original and reconstructed images and observe their differences
6. Animation
6.1. Introduction to Macromedia Flash
Macromedia Flash is a program used to create movies that include graphics and animation for Web sites. Flash movies consist primarily of vector graphics, but they can also contain imported bitmap graphics and sounds. Flash movies can incorporate interactivity to permit input from users, and Flash can also be used to create nonlinear movies that interact with other Web applications. Web designers use Flash to create navigation controls, animated logos, long-form animations with synchronized sound, and even complete, sensory-rich Web sites. Because Flash movies are compact vector graphics, they can be downloaded rapidly.
Flash files that are viewable on the Internet are in SWF (Shockwave Flash) file format. The SWF
file is created from an FLA file at the time of publication. An FLA file is the actual project used
to work with in Flash. The FLA holds all of the keyframes and individual movies that are
sandwiched together to make the final animation or SWF file.
Timeline: the timeline indicates where graphics are animated over time.
Stage: the area where the movie plays.
Work area: a place to work on objects, it is not viewable when you play your movie.
Toolbox: The toolbox contains all tools necessary for drawing, viewing, coloring and modifying
your objects. Each tool in the toolbox comes with a specific set of options to modify that tool.
o If using the Rectangle or the Oval Tools, press the <Shift> key while dragging to
constrain shapes to squares and circles.
o If using the Line Tool, press the <Shift> key while dragging to constrain the line
angles to 45 degrees.
Using Strokes and Fill Colors: Rectangle and Oval Tools create shapes that have stroke
(outline color) and fill (interior color) areas.
To create a group:
Select the objects to include in the group, such as shapes, symbols, and text.
Select Modify > Group, or press <Ctrl> + <G>.
6.3. Working with Layers
Layers are like transparencies stacked on top of each other. When a new Flash movie is created, it contains one layer. More layers can be added to organize artwork, animation, and other movie elements. Objects can be drawn and edited on one layer without affecting objects on another layer.
An unlimited number of layers can be created, and layers do not increase the file size of a
published movie. You can hide layers, lock layers, or display layer contents as outlines. You can
also change the order of layers. Layers are controlled on the Timeline.
Right click on the layer name and select Properties from the shortcut menu.
In the Layer Properties dialog, next to Outline Color, select a color from the palette.
Click OK.
6.4. Working with the Timeline
The Timeline organizes and controls a movie’s content over time in layers and frames. The
major components of the Timeline are layers, frames, and the Play Head. Layers in a movie are
listed in a column on the left side of the Timeline. Frames contained in each layer appear in a
row to the right of the layer name. The Timeline header at the top of the Timeline indicates
frame numbers. The Play Head indicates the current frame displayed on the Stage.
Moving the Play Head: The Play Head moves through the Timeline to indicate the current
frame displayed on the stage. The Timeline Header shows the frame numbers of the animation.
Frame Labels and Movie Comments: Frame labels are useful for identifying keyframes in the
Timeline and should be used instead of frame numbers when targeting frames in actions.
Keyframes: A keyframe is a frame in which changes in the animation are defined. With frame-by-frame animation, every frame is a keyframe. In tweened animation, keyframes are defined at significant points in the animation, and Flash creates the content of the frames in between. Flash displays the interpolated frames of a tweened animation as light blue or green with an arrow drawn between keyframes.
Flash redraws shapes in each keyframe. Keyframes should only be created at the points in which
something in the artwork changes. Keyframes are indicated in the Timeline. A solid circle
represents a keyframe with content on it, and a vertical line before the frame represents an empty
keyframe. Subsequent frames added to the same layer will have the same content as the
keyframe.
6.5. Creating Animations
Changing the content of successive frames creates animation. With animation, you can make an
object move across the stage, increase or decrease its size, rotate, change color, fade in or out, or
change shape. Changes can occur independently of or in concert with other changes. For
example, an object can be made to rotate and fade in while it moves across the stage.
There are two methods for creating an animation sequence in Flash: frame-by-frame animation
and tweened animation.
Frame-by-frame animation: In frame-by-frame animation you create the object in every frame, and every frame is a keyframe. This is indicated on the timeline with a black circle in every frame, as shown in the figure below.
Tweened animation: In tweened animation, starting and ending frames are created, and Flash
creates the frames in between. Tweened animation is indicated on the timeline with a black circle
in the beginning and ending frames and an arrow over the interpolated frames. Flash varies the
object’s size, rotation, color, or other attributes evenly between the starting and ending frames to
create the appearance of movement. Tweened animation is an effective way to create movement
and changes over time while minimizing file size. In tweened animation, only the values for the
changes between frames are stored. In frame-by-frame animation, the values for each complete
frame are stored.
Flash can create two types of tween animation: shape tweening and motion tweening.
Shape tweening
In shape tweening, you draw a shape at one point in time, and then you change that shape or
draw another shape at another point in time. Flash interpolates the values or shapes for the
frames in between, creating the animation. Shape tweening has the effect of morphing shapes,
making one shape appear to change into another shape over time. If tweening is performed on
multiple shapes, all of the shapes must be on the same layer. The location, size and color of
shapes can also be changed. Tweening one shape at a time usually has the best results.
Note: Flash cannot tween the shape of groups, symbols, text blocks, or bitmap images.
To apply shape tweening to grouped objects, use Modify > Break Apart.
To tween a shape:
Click a layer name and make it the current layer, and select an empty keyframe where
you want the animation to start.
Create the image for the first frame of the sequence. Use any of the drawing tools to
create the shape.
Create a second keyframe after the desired number of frames from the first keyframe.
Create an image for the last keyframe in the sequence.
Go to the property inspector.
In the Frame panel, for Tweening, select Shape.
Motion Tweening
Motion tweening is a technique that tweens the changes in properties of instances, groups, and
type. Flash can tween position, size, rotation, and skew of instances, groups, and type.
Additionally, it can tween the color of instances and type, creating gradual color shifts or making
an instance fade in or out.
Note: Flash cannot apply motion tweening to shapes. Motion tweening only applies to instances,
groups, and text.
1. Click a layer name and make it the current layer, and select an empty keyframe where
you want the animation to start.
2. Create the image for the first frame of the sequence. Create and arrange any instances,
groups, and types.
3. Create a second keyframe the desired number of frames after the first keyframe.
4. Do one of the following to modify the instance, group, or text block in the ending frame:
o Move the item to a new position.
o Modify the item’s size, rotation, or skew.
o Modify the item’s color (instance or text block only).
o To tween the color of elements other than instances or text blocks, use shape
tweening.
5. Go to the property inspector.
6. In the Frame panel, for Tweening, select Motion.
7. If the size of the item was modified in step 4, select the Scale to tween the size of the
selected item.
8. Click and drag the arrow next to the Easing value or enter a value to adjust the rate of
change between tweened frames:
o To begin the motion tween slowly and accelerate the tween toward the end of the
animation, drag the slider up or enter a value between –1 and –100.
o To begin the motion tween rapidly and decelerate the tween toward the end of the
animation, drag the slider down or enter a value between 1 and 100.
9. To rotate the selected item while tweening, choose an option from the Rotate menu:
o Select None (the default setting) to apply no rotation.
o Select Auto to rotate the object once in the direction requiring the least motion.
o Select Clockwise (CW) or Counterclockwise (CCW) to rotate the object as
indicated, and then enter a number to specify the number of rotations.
Tweening Motion along a Path: on the Timeline, motion guide layers let you draw paths along
which tweened instances, groups, or text blocks can be animated. You can link multiple layers to
a motion guide layer to have multiple objects follow the same path. A normal layer that is linked
to a motion guide layer becomes a guided layer.
6.6. Publishing and Exporting
To deliver a Flash animation to an audience, the FLA file must first be published or exported to another format for playback. The Flash Publish feature is designed for presenting animation on the Web. The Publish command creates the Flash Player (SWF) file and an HTML document that embeds the Flash Player file in a browser window.
Publishing a Flash movie on the Web is a two-step process. First, prepare all required files for
the complete Flash application with the Publish Settings command. Then, publish the movie and
all of its files with the Publish command. The Publish settings command lets you choose formats
and specify settings for the individual files included in the movie – including GIF, JPEG, or
PNG, and then store these settings with the movie file. Depending on what you specified in the
Publish Settings dialog box, the Publish command then creates the following files:
The Flash movie for the Web file (SWF).
Alternate images in a variety of formats that appear automatically if the Flash Player is
not available (GIF, JPEG, PNG, and QuickTime).
The supporting HTML document required to display the movie (or alternative image) in a
browser and control browser settings.
Stand-alone projectors for both Windows and Macintosh systems and Quicktime videos
from Flash movies (EXE, HQX, or MOV files, respectively).