Posts

ArrayList without generics

import java.util.*;

public class Main {
    public static void main(String[] args) throws Exception {
        ArrayList a = new ArrayList();
        a.add(1);
        a.add("j");
        a.add('j');
        System.out.println(a.get(0) instanceof Integer);
        System.out.println(a.get(1) instanceof String);
        System.out.println(a.get(2) instanceof Character);
    }
}

Output:
true
true
true

How does ArrayList work internally? https://www.youtube.com/watch?v=jSlmXcE7g7A

Numbers are randomly generated and stored into an expanding array. How would you keep track of the median? - Interview Question

Concept: We will use two heaps, a min-heap and a max-heap, to keep track of the median at every step. The first half of the numbers is stored in the max-heap and the second half in the min-heap. If both heaps have the same size, the median is the mean of the roots of the min-heap and the max-heap. If the sizes differ, the heap with the larger size contains the median at its root. Throughout, we have to keep the sizes of the two heaps balanced so that they are either equal or differ by only 1. This ensures that our median candidates are always at the roots of the heaps. Note: To simulate random generation and storage into an expanding array, I pass a random number to the insert() function after some sleep time. This way we get the feeling of receiving a continuous stream of numbers. We also calculate the median on every insert(). Time Complexity
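A minimal sketch of the two-heap approach described above. The class and method names (MedianTracker, insert, median) are illustrative, not from the post, and the sleep-based stream simulation is omitted:

```java
import java.util.Collections;
import java.util.PriorityQueue;

public class MedianTracker {
    // Max-heap holding the lower half of the numbers seen so far.
    private final PriorityQueue<Integer> lower = new PriorityQueue<>(Collections.reverseOrder());
    // Min-heap holding the upper half.
    private final PriorityQueue<Integer> upper = new PriorityQueue<>();

    public void insert(int x) {
        if (lower.isEmpty() || x <= lower.peek()) {
            lower.add(x);
        } else {
            upper.add(x);
        }
        // Rebalance so the sizes differ by at most 1.
        if (lower.size() > upper.size() + 1) {
            upper.add(lower.poll());
        } else if (upper.size() > lower.size() + 1) {
            lower.add(upper.poll());
        }
    }

    public double median() {
        if (lower.size() == upper.size()) {
            // Equal sizes: median is the mean of the two roots.
            return (lower.peek() + upper.peek()) / 2.0;
        }
        // Unequal sizes: the larger heap's root is the median.
        return lower.size() > upper.size() ? lower.peek() : upper.peek();
    }
}
```

Each insert costs O(log n) for the heap operations, and reading the median is O(1), matching the idea that the candidates always sit at the roots.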

Dynamic Programming - Introduction

What is Dynamic Programming? Dynamic Programming, or DP, is a method for solving a problem by breaking it into smaller sub-problems, solving each sub-problem only once, and storing the result of each sub-problem for future use, so that we do not solve the same sub-problem again and again (as we do in Divide and Conquer). Dynamic Programming is essentially an optimization of Divide and Conquer. In Divide and Conquer we may solve a sub-problem multiple times, but DP prevents that by storing the result of a sub-problem once it is solved, saving on run-time while compromising a little on space complexity. When should we use the Dynamic Programming approach? When we see both of these properties in a problem, we can use DP: Optimal Substructure - this property has already been explained in the post on Divide and Conquer, as Divide and Conquer also has this property. Overlapping Subproblems - a problem has overlapping sub-problems if finding its solution involves solving the same sub-problems repeatedly
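A minimal sketch of the "store each sub-problem's result once" idea, using the Fibonacci recurrence. The class name Fib and the use of a HashMap as the memo table are illustrative choices, not from the post:

```java
import java.util.HashMap;
import java.util.Map;

public class Fib {
    // Memo table: maps n to the already-computed Fibonacci(n).
    private static final Map<Integer, Long> memo = new HashMap<>();

    public static long fib(int n) {
        if (n <= 1) return n;
        // Reuse a stored result instead of re-solving the sub-problem.
        Long cached = memo.get(n);
        if (cached != null) return cached;
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);  // store the sub-problem's result for future use
        return result;
    }
}
```

Without the memo table this recursion takes exponential time because the same sub-problems overlap; with it, each value of n is computed once, giving O(n) time at the cost of O(n) extra space.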

Divide and Conquer - Introduction

What is Divide and Conquer? Divide and Conquer is an algorithm paradigm in which we break a given problem down into smaller sub-problems of a similar type until the sub-problems are small enough to be solved directly. The solutions to these sub-problems are then combined to give a solution to the original problem. Study the diagram below; it will make clearer what we do in Divide and Conquer. Where to apply Divide and Conquer? Divide and Conquer can be applied to all those problems that show the Optimal Substructure property. So, what is this Optimal Substructure property? If we can break a problem down into similar sub-problems, and the solutions of these sub-problems can be used to build the solution of the whole problem, then that problem is said to have the Optimal Substructure property. For example: Fibonacci(n) = Fibonacci(n-1) + Fibonacci(n-2). Here we can see that if we want to find the Fibonacci number at the nth place, we can break this problem in
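The Fibonacci recurrence from the text, written as a direct divide-and-conquer recursion (the class name is an illustrative choice):

```java
public class Fibonacci {
    public static long fibonacci(int n) {
        // Base case: the sub-problem is small enough to solve directly.
        if (n <= 1) return n;
        // Divide into two smaller sub-problems, then combine their
        // solutions (here, by addition) into the solution of the whole.
        return fibonacci(n - 1) + fibonacci(n - 2);
    }
}
```

Note that this plain recursion re-solves the same sub-problems many times, which is exactly the inefficiency that Dynamic Programming later removes by memoizing results.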

Activity Selection Problem - Greedy Algorithm

What is the Activity Selection Problem? We are given 'n' activities with their start and finish times, and we have to select the maximum number of activities that can be performed by a single person, assuming that the person can only work on a single activity at a time.

Activity     A1  A2  A3  A4  A5  A6
Start Time    0   3   1   5   5   8
End Time      6   4   2   8   7   9

How are we going to solve this Activity Selection Problem? [Time Complexity - O(nlogn), Space Complexity - O(n)] As we want to cover the maximum number of tasks, we should prefer the tasks that finish first. So our focus is going to be on the end time of tasks.
1. Sort the tasks by end time.
2. Always pick the first task and initialize a variable prevTaskTime = end time of the first task.
3. For all other tasks:
4.     If the current task's start time is greater than or equal to prevTaskTime:
5.         Pick this task and update prevTaskTime to the end time of the
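The steps above can be sketched as follows. The class name, method name, and the {start, end} array encoding are illustrative choices, not from the post:

```java
import java.util.Arrays;
import java.util.Comparator;

public class ActivitySelection {
    // Each activity is an int[]{start, end}; returns the maximum number
    // of non-overlapping activities one person can perform.
    public static int maxActivities(int[][] activities) {
        int[][] byEnd = activities.clone();
        // 1. Sort the tasks by end time.
        Arrays.sort(byEnd, Comparator.comparingInt(a -> a[1]));
        // 2. Always pick the first task.
        int count = 1;
        int prevTaskTime = byEnd[0][1];
        // 3-5. Pick every task that starts after the previous pick ends.
        for (int i = 1; i < byEnd.length; i++) {
            if (byEnd[i][0] >= prevTaskTime) {
                count++;
                prevTaskTime = byEnd[i][1];
            }
        }
        return count;
    }
}
```

On the table above ({0,6}, {3,4}, {1,2}, {5,8}, {5,7}, {8,9}) this selects A3, A2, A5, and A6, for a maximum of 4 activities.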

Greedy Algorithm - Introduction

What is a Greedy Algorithm? A Greedy Algorithm is an algorithm, or rather a technique, that always makes the choice that seems best at the moment. It makes a locally optimal choice, assuming that this choice will lead to a globally optimal solution of the given problem. Because a Greedy Algorithm assumes that the choice made at the local level is the best one available at that moment, it has only one shot at computing the optimal solution: a greedy solution never goes back and reverses its decisions. Where should we use a Greedy Algorithm? When the given problem has Optimal Substructure, we can use a Greedy Algorithm to solve it. Optimal Substructure means that the optimal solutions to the sub-problems (locally optimal solutions) can lead to the optimal solution of the whole problem (the globally optimal solution). Common Greedy Algorithms: Selection Sort, Insertion Sort, Topological Sort, Kruskal's Al
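A small illustration of the locally-optimal-choice idea, not taken from the post: making change by always grabbing the largest coin that fits. For canonical coin systems such as {25, 10, 5, 1}, this greedy choice happens to give the globally optimal (fewest-coin) answer, though for arbitrary denominations greedy can fail:

```java
public class GreedyChange {
    // Greedily counts coins for an amount; coins must be sorted descending.
    public static int minCoins(int amount, int[] coinsDescending) {
        int count = 0;
        for (int coin : coinsDescending) {
            // Locally optimal choice: take as many of the largest coin as fit.
            count += amount / coin;
            amount %= coin;
        }
        return count;
    }
}
```

Note that the algorithm never revisits a decision: once it has committed to a number of quarters, it moves on, which is exactly the one-shot behaviour described above.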

Minimum Spanning Tree - Prim's Algorithm

What is Prim's Algorithm? Like Kruskal's Algorithm, Prim's Algorithm is a Greedy Algorithm and is used to find the Minimum Spanning Tree of a graph. In Prim's Algorithm, we start from a source and then keep selecting the next vertex with the minimum distance. We avoid forming a cycle by marking each node we have already processed as 'visited'. What is the difference between Prim's Algorithm and Kruskal's Algorithm? In Kruskal's Algorithm, our focus was on edges; in Prim's Algorithm, our focus is on the vertices/nodes of the graph. Prim's Algorithm [Time Complexity: O(ElogV), Space Complexity: O(V)]
1. Take any vertex/node of the graph as the starting vertex/node and mark its distance as 0. Mark the distance of all other nodes as infinity.
2. Push all the vertices into a min-heap.
3. While the min-heap is not empty:
4.     Extract the min from the min-heap and mark the extracted node as visited.
5.     For
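A sketch of the steps above. Java's PriorityQueue has no decrease-key operation, so instead of pushing all vertices up front with updated distances, this version uses the common "lazy" variant: push new (distance, vertex) entries as edges are relaxed and skip entries whose vertex is already visited. Class and method names are illustrative, and the graph is assumed connected:

```java
import java.util.List;
import java.util.PriorityQueue;

public class PrimMST {
    // graph is an adjacency list: graph.get(u) holds int[]{v, weight} pairs.
    // Returns the total weight of the Minimum Spanning Tree.
    public static int mstWeight(List<List<int[]>> graph) {
        int n = graph.size();
        boolean[] visited = new boolean[n];
        // Heap entries are {distance, vertex}; start from vertex 0 with distance 0.
        PriorityQueue<int[]> heap =
                new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        heap.add(new int[]{0, 0});
        int total = 0;
        while (!heap.isEmpty()) {
            int[] top = heap.poll();          // extract-min
            int u = top[1];
            if (visited[u]) continue;          // stale (lazy) entry: skip
            visited[u] = true;                 // avoid cycles
            total += top[0];
            for (int[] edge : graph.get(u)) {  // relax edges to unvisited neighbours
                if (!visited[edge[0]]) heap.add(new int[]{edge[1], edge[0]});
            }
        }
        return total;
    }
}
```

Each edge is pushed at most once per direction, so the heap operations give the O(E log V) bound mentioned above.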