First and foremost, do not even walk into a software interview without knowing what Big O analysis is all about; otherwise, you will embarrass yourself. It's simply something that you must know if you expect to get a job in this industry. Here we present a tutorial on Big O Notation, along with some simple examples to really help you understand it.
int CompareSmallestNumber (int array[ ])
{
    // set smallest value to first item in array
    int curMin = array[0];

    // iterate through array to find smallest value
    for (int x = 1; x < 10; x++)
    {
        if (array[x] < curMin)
        {
            curMin = array[x];
        }
    }

    // return smallest value in the array
    return curMin;
}
As promised, we want to show you another solution to the problem. In this solution, we will use a different algorithm. What we do is compare each value in the array to all of the other numbers in the array, and if that value is less than or equal to all of the other numbers in the array then we know that it is the smallest number in the array.
int CompareToAllNumbers (int array[ ])
{
    bool isMin;
    int x;

    // iterate through each element
    for (x = 0; x < 10; x++)
    {
        isMin = true;

        for (int y = 0; y < 10; y++)
        {
            /* compare the value in array[x] to the other values;
               if we find that array[x] is greater than any of the
               values in array[y], then we know that the value in
               array[x] is not the minimum

               remember that it is the same array; we are just taking
               out one value with index 'x' and comparing it to the
               other values in the array with index 'y' */
            if (array[x] > array[y])
                isMin = false;
        }

        if (isMin)
            break;
    }

    return array[x];
}
Now, you've seen 2 functions that solve the same problem - but each one uses a different algorithm. We want to be able to say which algorithm is more efficient, and Big-O analysis allows us to do exactly that.
In the function CompareSmallestNumber, the n input items (we used 10 items, but let's just use the variable 'n' for now) are each 'touched' only once, when each one is compared to the minimum value. In Big O notation, this would be written as O(n), which is also known as linear time. Linear time means that the time taken to run the algorithm increases in direct proportion to the number of input items. So, 80 items would take longer to run than 79 items or any smaller quantity.

You might also see that in the CompareSmallestNumber function, we initialize the curMin variable to the first value of the input array. And that does count as 1 'touch' of the input. So, you might think that our Big O notation should be O(n + 1). But actually, Big O is concerned with the running time as the number of inputs, which is 'n' in this case, approaches infinity. And as 'n' approaches infinity, the constant '1' becomes very insignificant, so we actually drop the constant. Thus, we can say that the CompareSmallestNumber function is O(n) and not O(n + 1). Similarly, if we have n³ + n, then as n approaches infinity it's clear that the "+ n" becomes very insignificant, so we drop it: instead of O(n³ + n), we have O(n³).

Now, let's do the Big O analysis of the CompareToAllNumbers function. Let's say that we want to find the worst case running time for this function and use that as the basis for the Big O notation. So, for this function, let's assume that the smallest integer is in the very last element of the array. Since we are taking each element in the array and comparing it to every other element in the array, that means we will be doing 100 comparisons if our input size is 10 (10 * 10 = 100). Or, if we use the variable n, that will be n² 'touches' of the input. Thus, this function uses an O(n²) algorithm.
In a breadth first search, you start at the root node, and then scan each node in the first level starting from the leftmost node, moving towards the right. Then you continue scanning the second level (starting from the left) and the third level, and so on, until you've scanned all the nodes, or until you find the actual node that you were searching for. In a BFS, when traversing one level, we need some way of knowing which nodes to traverse once we get to the next level. The way this is done is by storing the pointers to a level's child nodes while searching that level. The pointers are stored in a FIFO (First-In-First-Out) queue. This, in turn, means that BFS uses a large amount of memory, because we have to store the pointers.
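The description above can be sketched in Java. This is just one possible sketch, and the minimal TreeNode class and the bfs method name are assumptions for illustration, not part of the original text; the key point is the FIFO queue holding the child pointers for the next level.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// a minimal, hypothetical tree node class for illustration
class TreeNode {
    int value;
    TreeNode left, right;
    TreeNode(int value, TreeNode left, TreeNode right) {
        this.value = value;
        this.left = left;
        this.right = right;
    }
}

public class BfsExample {
    // returns true if a node with the given value exists in the tree
    static boolean bfs(TreeNode root, int target) {
        Queue<TreeNode> queue = new ArrayDeque<>();  // the FIFO queue of pointers
        if (root != null) queue.add(root);
        while (!queue.isEmpty()) {
            TreeNode node = queue.remove();          // take nodes in FIFO order
            if (node.value == target) return true;
            // store the child pointers so we know what to traverse
            // once we get to the next level
            if (node.left != null) queue.add(node.left);
            if (node.right != null) queue.add(node.right);
        }
        return false;  // scanned every node without finding the target
    }
}
```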
An example of BFS
Here's an example of what a BFS would look like. The numbers represent the order in which the nodes are accessed in a BFS:
In a depth first search, you start at the root, and follow one of the branches of the tree as far as possible until either the node you are looking for is found or you hit a leaf node (a node with no children). If you hit a leaf node, then you continue the search at the nearest ancestor with unexplored children.
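A DFS can be sketched very compactly with recursion, since backing up to "the nearest ancestor with unexplored children" falls out of the call stack for free. As with the BFS sketch, the minimal TreeNode class and the dfs method name are illustrative assumptions:

```java
// a minimal, hypothetical tree node class for illustration
class TreeNode {
    int value;
    TreeNode left, right;
    TreeNode(int value, TreeNode left, TreeNode right) {
        this.value = value;
        this.left = left;
        this.right = right;
    }
}

public class DfsExample {
    // returns true if a node with the given value exists in the tree
    static boolean dfs(TreeNode node, int target) {
        if (node == null) return false;          // fell off the end of a branch
        if (node.value == target) return true;   // found the node we wanted
        // follow one branch as far as possible; if that fails, the
        // recursion unwinds to the nearest ancestor and tries the other branch
        return dfs(node.left, target) || dfs(node.right, target);
    }
}
```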
An example of DFS
Here's an example of what a DFS would look like. The numbers represent the order in which the nodes are accessed in a DFS:
Comparing BFS and DFS, the big advantage of DFS is that it has much lower memory requirements than BFS, because it's not necessary to store all of the child pointers at each level. Depending on the data and what you are looking for, either DFS or BFS could be advantageous. For example, given a family tree, if one were looking for someone on the tree who's still alive, then it would be safe to assume that person would be on the bottom of the tree. This means that a BFS would take a very long time to reach that last level. A DFS, however, would find the goal faster. But, if one were looking for a family member who died a very long time ago, then that person would be closer to the top of the tree. Then, a BFS would usually be faster than a DFS. So, the advantages of either vary depending on the data and what you're looking for.
What are the differences between a hash table and a binary search tree? Suppose that you are trying to figure out which of those data structures to use when designing the address book for a cell phone that has limited memory. Which data structure would you use?
A hash table can insert and retrieve elements in O(1) (for a big-O refresher, read here). A binary search tree can insert and retrieve elements in O(log(n)), which is quite a bit slower than the hash table's O(1).
However, a binary search tree keeps its elements in sorted order, while a hash table does not, and an address book should list its entries in alphabetical order; it is an address book, after all. So, by using a hash table, you would have to set aside memory to sort elements that would have otherwise been used as storage space.
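Java's standard library happens to offer both structures, which makes the ordering difference easy to see: HashMap is a hash table (O(1) operations, arbitrary iteration order), while TreeMap is a balanced search tree (O(log n) operations, sorted iteration order). The names and phone number below are of course made up for illustration:

```java
import java.util.Map;
import java.util.TreeMap;

public class AddressBookDemo {
    // insert names into a tree-based map and return them in iteration order
    static String sortedNames(String... names) {
        // a HashMap here would give O(1) inserts but an arbitrary order;
        // the TreeMap costs O(log n) per insert and keeps keys sorted
        Map<String, String> treeBook = new TreeMap<>();
        for (String name : names) {
            treeBook.put(name, "555-0100");  // hypothetical phone number
        }
        // a TreeMap iterates its keys in alphabetical order automatically,
        // with no separate sorting step and no extra memory for sorting
        return treeBook.keySet().toString();
    }

    public static void main(String[] args) {
        System.out.println(sortedNames("Carol", "Alice", "Bob"));  // [Alice, Bob, Carol]
    }
}
```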
Suppose that you are given a linked list that is either circular or not circular (another word for not circular is acyclic). Take a look at the figures below if you are not sure what a circular linked list looks like. Write a function that takes as an input a pointer to the head of a linked list and determines whether the list is circular or whether the list has an ending node. If the linked list is circular, then your function should return true; otherwise, your function should return false. You cannot modify the linked list in any way.
This is an acyclic (non-circular) linked list:
You should start out this problem by taking a close look at the pictures of a circular linked list and an acyclic linked list so that you can understand the difference between the 2 types of lists.
The difference between the 2 types of linked lists is at the very last node of the list. In the circular linked list, you can see that the very last node (37) links right back to the first node in the list, which makes it circular. However, in the acyclic or non-circular linked list, the end node does not point to another node in the list; it just ends. It is easy enough to know when you are in an acyclic linked list: the pointer in the end node will just be pointing to NULL. However, knowing when you are in a circular linked list is more difficult, because there is no end node, so your function would wind up in an infinite loop if it just searches for an end node. So, there has to be a solution other than just looking for a node that points to NULL, since that clearly will not work.

Take a closer look at the end node (37) in the circular linked list. Note that it points to the head node (12), and that there must also be a head pointer that points to the head node. This means that 12 is the only node that has 2 pointers pointing to it. And, suppose 37 pointed to 99 instead; then this would still be a circular linked list, and 99 would be the only node that has 2 pointers pointing to it. So, clearly, in a circular linked list there will always be a node that has 2 pointers pointing to it.

Now the question is whether we can use that property to somehow provide a solution to this problem, by traversing the list and checking every node to determine whether 2 nodes are pointing at it. If true, the list is circular. If not true, the list is acyclic, and you will eventually run into a NULL pointer at the last element.
Let's figure out what the Big-O of our algorithm is (for a refresher on Big-O Notation, read this: Big O Notation Explained). If we are at the first node, then we examine 0 nodes; if we are at the 2nd node, then 1 node is examined; if we are at the 3rd node, then 2 nodes are examined. This means that the algorithm examines 0 + 1 + 2 + 3 + ... + n nodes. The sum of 0 + 1 + 2 + ... + n is equal to (n²)/2 + (n/2). And when calculating the Big-O of an algorithm we take off everything except the highest order term, so this means our algorithm is O(n²).
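The idea analyzed above could be sketched like this in Java: record every node visited so far, and compare each new node against all of the earlier ones; looping back to an already-seen node means the list is circular. The minimal ListNode class is an assumption for illustration, and note that this sketch uses an auxiliary list (extra memory) on top of its O(n²) running time, which is exactly why the article goes on to look for a better solution:

```java
import java.util.ArrayList;
import java.util.List;

// a minimal, hypothetical singly linked list node for illustration
class ListNode {
    int value;
    ListNode next;
    ListNode(int value) { this.value = value; }
}

public class NaiveCircularCheck {
    // O(n^2): the k-th node visited is compared against the k-1 nodes before it,
    // giving the 0 + 1 + 2 + ... comparison count analyzed above
    static boolean isCircular(ListNode head) {
        List<ListNode> seen = new ArrayList<>();
        for (ListNode current = head; current != null; current = current.next) {
            for (ListNode earlier : seen) {
                if (earlier == current) {
                    return true;   // we looped back to a node we already visited
                }
            }
            seen.add(current);
        }
        return false;  // we ran into a NULL pointer, so the list is acyclic
    }
}
```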
1. Start 2 pointers travelling at different speeds from the head of the linked list.
2. Iterate through a loop.
3. If the faster pointer reaches a NULL pointer, then return that the list is acyclic and not circular.
4. If the faster pointer is ever equal to the slower pointer, or the faster pointer's next pointer is ever equal to the slower pointer, then return that the list is circular.
5. Advance the slower pointer one node.
6. Advance the faster pointer by 2 nodes.
If we write the actual code for this, it would look like this:
bool findCircular(Node *head)
{
    Node *slower = head;
    Node *faster = head;

    while (true)
    {
        // if the faster pointer encounters a NULL element,
        // the list has an end, so it is not circular
        if (!faster || !faster->next)
            return false;

        // advance the pointers before comparing; otherwise both
        // pointers still point at the head and would compare equal
        // on the very first pass
        slower = slower->next;
        faster = faster->next->next;

        // if the faster pointer ever equals the slower pointer, or
        // faster's next pointer is ever equal to slower, then it's
        // a circular list
        if (faster == slower || (faster && faster->next == slower))
            return true;
    }
}
And there is our solution. What is the Big-O of our solution? Well, what's the worst case if we know that the list is circular? In this case, the slower pointer will never go around any loop more than once, so it will examine a maximum of n nodes. The faster pointer, however, will traverse through 2n nodes and examine half of those nodes (n nodes). The faster pointer will pass the slower pointer regardless of the size of the circle, which makes it a worst case of n + n, which equals 2n nodes. This is O(n).
Suppose that you are given a binary tree like the one shown in the figure below. Write some code in Java that will do a preorder traversal for any binary tree and print out the nodes as they are encountered. So, for the binary tree in the figure below, the algorithm will print the nodes in this order: 2, 7, 2, 6, 5, 11, 5, 9, 4
When trying to figure out what the algorithm for this problem should be, you should take a close look at the way the nodes are traversed: a preorder traversal keeps traversing the leftmost part of the tree until a leaf node (which has no child nodes) is encountered, then it goes up the tree, goes to the right child node (if any), and then goes up the tree again, and then as far left as possible, and this keeps repeating until all of the nodes are traversed. So, it looks like each sub-tree within the larger tree is being traversed in the same pattern, which should make you start thinking in terms of breaking this problem down into sub-trees. And anytime a problem is broken down into smaller problems that keep repeating, you should immediately start thinking in recursion to find the most efficient solution.

So, let's take a look at the 2 largest sub-trees and see if we can come up with an appropriate algorithm. You can see in the figure above that the sub-trees of 7 and 5 (child nodes of the root at 2) are the 2 largest subtrees. Let's start by making observations and see if we can convert those observations into an actual algorithm. First off, you can see that all of the nodes in the subtree rooted at 7 (including 7 itself) are printed out before the subtree rooted at 5. So, we can say that for any given node, the subtree of its left child is printed out before the subtree of its right child. This sounds like a legitimate algorithm, so we can say that when doing a preorder traversal, for any node we would print the node itself, then follow the left subtree, and after that follow the right subtree. Let's write that out in steps:
1. Print out the root's value, regardless of whether you are at the actual root or just the subtree's root.
2. Go to the left child node, and then perform a pre-order traversal on that left child node's subtree.
3. Go to the right child node, and then perform a pre-order traversal on that right child node's subtree.
4. Do this recursively.
This sounds simple enough. Let's now start writing some actual code. But first, we must have a Node class that represents each individual node in the tree, and that Node class must also have some methods that would allow us to go to the left and right nodes. This is what it would look like in Java pseudocode:
public class Node {
    private Node right;
    private Node left;
    private int nodeValue;

    public Node() {
        // a Java constructor
    }

    public Node leftNode() { return left; }
    public Node rightNode() { return right; }
    public int getNodeValue() { return nodeValue; }
}
Given the Node class above, let's write a recursive method that will actually do the preorder traversal for us. In the code below, we also assume that we have a method called printNodeValue which will print out the Node's value for us.
void preOrder(Node root) {
    if (root == null)
        return;

    root.printNodeValue();
    preOrder(root.leftNode());
    preOrder(root.rightNode());
}
Because every node is examined once, the running time of this algorithm is O(n).
Suppose that you are given a binary tree like the one shown in the figure below. Write some code in Java that will do an inorder traversal for any binary tree and print out the nodes as they are encountered. So, for the binary tree in the figure below, the algorithm will print the nodes in this order: 2, 7, 5, 6, 11, 2, 5, 4, 9. Note that the very first 2 that is printed out is the left child of 7, and NOT the 2 in the root node.
When trying to figure out what the algorithm for this problem should be, you should take a close look at the way the nodes are traversed. In an inorder traversal, if a given node is the parent of some other node(s), then we traverse to the left child. If there is no left child, then go to the right child, and traverse the subtree of the right child until you encounter the leftmost node in that subtree. Then process that left child, and then process the current parent node. And then the traversal pattern is repeated.

So, it looks like each sub-tree within the larger tree is being traversed in the same pattern, which should make you start thinking in terms of breaking this problem down into sub-trees. And anytime a problem is broken down into smaller problems that keep repeating, you should immediately start thinking in recursion to find the most efficient solution. So, let's take a look at the 2 largest sub-trees and see if we can come up with an appropriate algorithm. You can see in the figure above that the sub-trees of 7 and 5 (child nodes of the root at 2) are the 2 largest subtrees.
1. Go to the left child node, and perform an inorder traversal on that left child node's subtree.
2. Print out the value of the current node.
3. Go to the right child node, and then perform an inorder traversal on that right child node's subtree.
This sounds simple enough. Let's now start writing some actual code. But first, we must have a Node class that represents each individual node in the tree, and that Node class must also have some methods that would allow us to go to the left and right nodes, and also a method that would allow us to print a Node's value. This is what it would look like in Java pseudocode:
public class Node {
    private Node right;
    private Node left;
    private int nodeValue;

    public Node() {
        // a Java constructor
    }

    public Node leftNode() { return left; }
    public Node rightNode() { return right; }
    public int getNodeValue() { return nodeValue; }
}
Given the Node class above, let's write a recursive method that will actually do the inorder traversal for us. In the code below, we also assume that we have a method called printNodeValue which will print out the Node's value for us.
void inOrder(Node root) {
    if (root == null)
        return;

    inOrder(root.leftNode());
    root.printNodeValue();
    inOrder(root.rightNode());
}
Because every node is examined once, the running time of this algorithm is O(n).
Suppose that you are given a binary tree like the one shown in the figure below. Write some code in Java that will do a postorder traversal for any binary tree and print out the nodes as they are encountered. So, for the binary tree in the figure below, the algorithm will print the nodes in this order: 2, 5, 11, 6, 7, 4, 9, 5, 2, where the very last node visited is the root node.
When trying to figure out what the algorithm for this problem should be, you should take a close look at the way the nodes are traversed; there is a pattern in the way that the nodes are traversed. If you break the problem down into subtrees, you can see that these are the operations that are being performed recursively at each node:
1. Traverse the left subtree.
2. Traverse the right subtree.
3. Visit the root.
This sounds simple enough. Let's now start writing some actual code. But first, we must have a Node class that represents each individual node in the tree, and that Node class must also have some methods that would allow us to go to the left and right nodes. This is what it would look like in Java:
public class Node {
    private Node right;
    private Node left;
    private int nodeValue;

    public Node() {
        // a Java constructor
    }

    public Node leftNode() { return left; }
    public Node rightNode() { return right; }
    public int getNodeValue() { return nodeValue; }
}
Given the Node class above, let's write a recursive method that will actually do the postorder traversal for us. In the code below, we also assume that we have a method called printNodeValue which will print out the Node's value for us.
void postOrder(Node root) {
    if (root == null)
        return;

    postOrder(root.leftNode());
    postOrder(root.rightNode());
    root.printNodeValue();
}
Because every node is examined once, the running time of this algorithm is O(n).
How do threads interact with the stack and the heap? How do the stack and heap work in multithreading?
In a multi-threaded application, each thread will have its own stack. But all the different threads will share the heap. Because the different threads share the heap in a multi-threaded application, this also means that there has to be some coordination between the threads, so that they don't try to access and manipulate the same piece(s) of memory in the heap at the same time.
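A short Java sketch can make this concrete. The Counter class, the runThreads method, and the increment counts below are all made up for illustration: two threads share one Counter object on the heap, each thread's loop variable lives on that thread's own stack, and the synchronized keyword provides the coordination so the threads don't manipulate the shared memory at the same time.

```java
public class SharedHeapDemo {
    // this Counter object will live on the heap and be shared by both threads
    static class Counter {
        private int count = 0;
        // synchronized coordinates the threads so they don't try to
        // manipulate the same piece of heap memory at the same time
        synchronized void increment() { count++; }
        synchronized int get() { return count; }
    }

    static int runThreads(int incrementsPerThread) throws InterruptedException {
        Counter shared = new Counter();   // one heap object, visible to both threads
        Runnable work = () -> {
            // the loop counter i lives on each thread's own stack
            for (int i = 0; i < incrementsPerThread; i++) {
                shared.increment();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();   // wait for both threads to finish
        t2.join();
        return shared.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runThreads(10000));  // 20000
    }
}
```

Without the synchronized keyword, the two threads could interleave their read-modify-write steps and some increments would be lost.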
void somefunction( )
{
    /* create an object "m" of class Member;
       this will be put on the stack since the "new"
       keyword is not used, and we are creating the
       object inside a function */
    Member m;

} // the object "m" is destroyed once the function ends
So, the object m is destroyed once the function has run to completion or, in other words, when it goes out of scope. The memory being used for the object m on the stack will be removed once the function is done running. If we want to create an object on the heap inside a function, then this is what the code would look like:
void somefunction( )
{
    /* create an object "m" of class Member;
       this will be put on the heap since the "new"
       keyword is used, and we are creating the object
       inside a function */
    Member *m = new Member( );

    /* the object "m" must be deleted,
       otherwise a memory leak occurs */
    delete m;
}
In the code above, you can see that the m object is created inside a function using the new keyword. This means that m will be created on the heap. But, since m is created using the new keyword, that also means that we must delete the m object on our own as well; otherwise, we will end up with a memory leak.
How long does memory on the stack last versus memory on the heap?
Once a function call runs to completion, any data on the stack created specifically for that function call will automatically be deleted. Any data on the heap will remain there until it's manually deleted by the programmer.
Can the stack grow in size? Can the heap grow in size?
The stack is set to a fixed size, and can not grow past its fixed size (although some languages have extensions that do allow this). So, if there is not enough room on the stack to handle the memory being assigned to it, a stack overflow occurs. This often happens when a lot of nested functions are being called, or if there is an infinite recursive call. If the current size of the heap is too small to accommodate new memory, then more memory can be added to the heap by the operating system. This is one of the big differences between the heap and the stack.
For people new to programming, it's probably a good idea to use the stack since it's easier. Because the stack is small, you would want to use it when you know exactly how much memory you will need for your data, or if you know the size of your data is very small. It's better to use the heap when you know that you will need a lot of memory for your data, or you just are not sure how much memory you will need (like with a dynamic array).
x! = x * (x - 1)!
and 0! = 1! = 1
Note how we defined the factorial of a number as that number multiplied by the factorial of the integer that is 1 less than the number (x * (x-1)!). So, what we have done is essentially break the problem down into a sub-task, and in order to find the factorial of a number we just keep finding the factorials of the integers below that number and multiplying. So, the factorial of 3 is equal to 3 multiplied by the factorial of 2, and the factorial of 2 is equal to 2 multiplied by the factorial of 1. And that is what recursion is all about: finding repetitive patterns, and breaking a problem down into repetitive subtasks.

But there is still one issue: we seem to have found a recursive case, in which the routine will call itself, but what about the base case? Remember that we must have both a recursive case and a base case. The base case is what will stop the routine from calling itself infinitely, and will stop the recursion. Think about this: what do you think would be a good base case for this problem? Well, it turns out that the base case would be when the factorial function hits a value of 1, because at that point we know the factorial of 1 is 1, so we should stop right there. And it doesn't make sense to allow the function to find the factorial of numbers less than 1, since the factorial is defined only for the integers between 1 and x. So, here's what the Java code for our recursive factorial method would look like:
public int factorial (int x) {
    if (x > 1) {
        // recursive case:
        return factorial(x - 1) * x;
    } else {
        // base case
        return 1;
    }
}
A call stack is what a program uses to keep track of which methods are currently executing, and where control should go when each one finishes executing. For example, suppose we have a method CreateBox which calls another method CreateLine in 4 different places. If the program has finished executing the method CreateLine, then it needs to know where in the CreateBox method it needs to return to. This is why the program uses a call stack, so that it can keep track of these details.
You can see that the first stack frame is created with x equal to 3. And then a call to Factorial(2) is made, so the 1st call to Factorial does not run to completion, because another call is made before the current call to Factorial can run to completion. A stack frame is used to hold the state of the first call to Factorial: it will store the variables (and their values) of the current invocation of Factorial, and it will also store the return address of the method that called it. This way, it knows where to return to when it finishes running.
Finally, in the 3rd stack frame, we run into our base case, which means the recursive calls are finished and then control is returned to the 2nd stack frame, where Factorial(1) * 2 is calculated to be 2, and then control is returned to the very first stack frame. Finally, our result of 6 is returned.
What are some of the differences between using recursion to solve a problem versus using iteration? Which one is faster? Which one uses more memory?
The fact is that recursion is rarely the most efficient approach to solving a problem, and iteration is almost always more efficient. This is because there is usually more overhead associated with making recursive calls, due to the fact that the call stack is so heavily used during recursion (for a refresher on this, read here). This means that many computer programming languages will spend more time maintaining the call stack than they will actually performing the necessary calculations.
Write a method in Java that will print out all the possible combinations (or permutations) of the characters in a string. So, if the method is given the string dog as input, then it will print out the strings god, gdo, odg, ogd, dgo, and dog since these are all of the possible permutations of the string dog. Even if a string has characters repeated, each character should be treated as a distinct character so if the string xxx is input then your method will just print out xxx 6 times.
Finding an algorithm to answer this question may seem challenging because finding all the different permutations of a string is something that you just do naturally without really thinking about it. But, try to figure out what algorithm you are implicitly using in your mind whenever you write down all the different permutations of a string. Let's use the word dogs as an example and see what different permutations we get. Here is what comes up when we list out all the possible permutations of the letters in dogs:
So, when we came up with the permutations of dogs above, how did we do it? What were we
implicitly thinking? Well, it looks like what we did in each column was choose one letter to start with, and then we found all the possible combinations for the string beginning with that letter. And once we picked a letter in the 2nd position, we then found all the possible combinations that begin with that 2 letter sequence before we changed any of the letters in the first 2 positions. So, basically what we did was choose a letter and then performed the permutation process starting at the next position to the right before coming back and changing the character on the left.
void permute(String input) {
    int inputLength = input.length();
    boolean[] used = new boolean[inputLength];
    StringBuffer outputString = new StringBuffer();
    char[] in = input.toCharArray();

    doPermute(in, outputString, used, inputLength, 0);
}

void doPermute(char[] in, StringBuffer outputString,
               boolean[] used, int inputLength, int level) {
    if (level == inputLength) {
        System.out.println(outputString.toString());
        return;
    }

    for (int i = 0; i < inputLength; ++i) {
        if (used[i])
            continue;

        outputString.append(in[i]);
        used[i] = true;
        doPermute(in, outputString, used, inputLength, level + 1);
        used[i] = false;
        outputString.setLength(outputString.length() - 1);
    }
}
Provide the Java code that would be used to find the factorial of a number using iteration and not recursion; in other words, use a loop to find the factorial of a number.
Earlier we had discussed how to find the factorial of a number using recursion. Now, if we want to find the factorial of a number using iteration instead of recursion, how would we do that? It's actually pretty simple, and it's something you should try to figure out on your own. A factorial of a number x is defined as the product of x and all positive integers below x. To calculate the factorial in a for loop, it seems like all we would have to do is start from x and then multiply by all integer values below x, and just hold that value until we are done iterating. And that is exactly what needs to be done. Here is what the code will look like in Java:
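The original code for this answer did not survive in this copy, so here is one straightforward sketch of the loop described above (the class and method names are just for illustration):

```java
public class IterativeFactorial {
    // multiply x by every positive integer below it,
    // holding the running product until the loop is done
    public static int factorial(int x) {
        int result = 1;
        for (int i = x; i > 1; i--) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5));  // 120
    }
}
```

Note that this version also handles 0! and 1! correctly, since the loop body never runs for x less than 2 and the result stays 1.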