Wednesday, February 13, 2013
BBC World TV Quiz Game
1.
Which gate of Delhi was renamed Victoria after
the recapture of the city by the British in 1857?
Ans---Lahori Gate
2.
Who is the only individual to have won a gold
medal at both the summer and winter Olympics?
Ans---Eddie Eagan
3.
Who led the Russian army when it finally gave
battle to Napoleon Bonaparte at Borodino in September 1812?
Ans---General Kutuzov
4.
Which psychological probe for personality traits
based on the interpretation of ink blots is named after its Swiss creator?
Ans---Rorschach test
5.
What name did the Roman Emperor Gaius Caesar
adopt from the Latin for ‘little boot’?
Ans---Caligula
6.
Who had presented to Sita the gem that she sent
back with Hanuman as a token for Rama?
Ans---Janaka
7.
Before Zinedine Zidane, who was the last player
to score two goals in a World Cup final match?
Ans---Mario Kempes of Argentina in 1978
8.
Inspired by the ancient Shilpashastras, which Indian
city was designed and constructed by Vidyadhar Chakraborty?
Ans--Jaipur
9.
Of which political group of the French
Revolution did Tipu Sultan become a member, even while at Seringapatam?
Ans---The Jacobin Club
10.
In 1932 from which city did JRD Tata fly to
Bombay in the first ever commercial mail flight in the subcontinent?
Ans---Karachi
11.
Which stretch of water, by its shape, is also
known as La Manche, French for ‘the sleeve’?
Ans---English Channel
12.
What was the name of the Portuguese taxation
system by which every Indian ship sailing for trade had to buy passes from the
viceroy of Goa?
Ans---Cartaz
13.
What was the name given to the Tsarist parliament
instituted by Nicholas II in 1905?
Ans---Duma
14.
What was Nazi Germany’s air force called?
Ans---Luftwaffe
15.
Which social practice was abolished by Lord William
Bentinck by ‘Regulation 17’ of 1829?
Ans---Sati
16.
On which Bengali novelist’s story did Mrinal Sen
base his 1969 Hindi film ‘Bhuvan Shome’?
Ans---Banaphul (the pen name of Balai Chand Mukhopadhyay)
17.
What political movement was started in 1870 by
Isaac Butt in Ireland and later influenced a similar movement in India?
Ans---Home Rule Association
18.
Which Greek mythological figure was endowed with
the gift of prophecy but was fated never to be believed?
Ans---Cassandra
19.
How many colours are there on the squares of
a Rubik’s Cube?
Ans---Six
20.
The name of which US public opinion statistician
has now become a generic term for sample surveys of public opinion?
Ans---George Gallup
Monday, February 4, 2013
ANALYSIS AND DESIGN OF ALGORITHMS
What do you mean by algorithm analysis?
Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless.
In computer science, the analysis of algorithms is the determination of the number of resources (such as time and storage) necessary to execute them. Most algorithms are designed to work with inputs of arbitrary length. Usually the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).
Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.
In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the length of the list being searched, or in O(log(n)), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g., a Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2 n + 1 time units are needed to return an answer.
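As a quick worked example of this bound (our own illustrative numbers, not part of the original text): for a sorted list of n = 1,000,000 elements, log2 n + 1 is roughly 21, and in fact about 20 lookups suffice in the worst case, since 2^20 = 1,048,576 already exceeds a million; a simple linear scan of the same list, by contrast, could require up to 1,000,000 comparisons.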
Distinguish between quicksort and heapsort
Heapsort:
Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heapsort to run in O(n log n) time.
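As an illustration of the description above, here is a minimal Python sketch of heapsort (our own example code; the helper name sift_down is an assumption, not something defined in the text):

def sift_down(a, start, end):
    # Move the value at 'start' down until the subtree rooted there is a max-heap.
    root = start
    while 2 * root + 1 <= end:
        child = 2 * root + 1                      # left child
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                            # right child is larger
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return

def heapsort(a):
    n = len(a)
    # Build a max-heap: after this loop the largest element is at index 0.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(a, start, n - 1)
    # Repeatedly move the root (current maximum) to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)

Building the heap takes O(n) time and each of the n removals costs O(log n), which is where the overall O(n log n) bound comes from.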
Quicksort:
Quicksort, or partition-exchange sort, is a sorting algorithm that, on average, makes O(n log n) comparisons to sort n items. In the worst case, it makes O(n²) comparisons, though this behavior is rare. Quicksort is often faster in practice than other O(n log n) algorithms. Additionally, quicksort's sequential and localized memory references work well with a cache. Quicksort can be implemented with an in-place partitioning algorithm, so the entire sort can be done with only O(log n) additional space.
Quicksort is a comparison sort and, in efficient implementations, is not a stable sort. It was developed by Tony Hoare.
Quicksort is a divide and conquer algorithm. Quicksort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quicksort can then recursively sort the sub-lists.
The steps are:
Pick an element, called a pivot, from the list.
Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
The base cases of the recursion are lists of size zero or one, which never need to be sorted.
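The following short Python sketch mirrors the three steps listed above (illustrative code only; picking the middle element as the pivot is our own assumption, and this version builds new sub-lists for clarity rather than using the in-place partition mentioned earlier):

def quicksort(items):
    # Base case: lists of size zero or one are already sorted.
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]                 # step 1: pick a pivot
    lesser = [x for x in items if x < pivot]       # step 2: partition around the pivot
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    # step 3: recursively sort the sub-lists and combine
    return quicksort(lesser) + equal + quicksort(greater)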
What is sorting? Explain insertion sort in detail.
Sorting is the process of arranging the elements of a list in a specified order, most commonly ascending or descending numerical or lexicographic order. Insertion sort is typically done in-place, by iterating up the array, growing the sorted list behind it. At each array position, it checks the value there against the largest value in the sorted list (which happens to be next to it, in the previous array position checked). If larger, it leaves the element in place and moves to the next. If smaller, it finds the correct position within the sorted list, shifts all the larger values up to make a space, and inserts into that correct position.
The resulting array after k iterations has the property that the first k + 1 entries are sorted ("+1" because the first entry is skipped). In each iteration the first remaining entry of the input is removed and inserted into the result at the correct position.
The most common variant of insertion sort, which operates on arrays, can be described as follows:
Suppose there exists a function called Insert designed to insert a value into a sorted sequence at the beginning of an array. It operates by beginning at the end of the sequence and shifting each element one place to the right until a suitable position is found for the new element. The function has the side effect of overwriting the value stored immediately after the sorted sequence in the array.
To perform an insertion sort, begin at the left-most element of the array and invoke Insert to insert each element encountered into its correct position. The ordered sequence into which the element is inserted is stored at the beginning of the array in the set of indices already examined. Each insertion overwrites a single value: the value being inserted.
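A minimal Python sketch of the array variant just described (our own illustrative code, not taken from the original text):

def insertion_sort(a):
    # Invariant: a[0..i-1] is always sorted.
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        # Shift larger elements one place to the right to open a slot.
        while j >= 0 and a[j] > value:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value          # insert the value at its correct position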
Distinguish between linear and binary search.
Binary search                                  | Linear search
-----------------------------------------------|------------------------------------------------
1) Data must be in sorted order                | 1) Data may be in any order
2) Time complexity is O(log n)                 | 2) Time complexity is O(n)
3) Only one WHEN condition is used             | 3) Any number of WHEN conditions may be used
4) Only the "=" relational operator is used    | 4) Any relational operator may be used
5) Access is faster                            | 5) Access is slower
6) Works only on single-dimensional arrays     | 6) Works on single- or multi-dimensional arrays
A linear search works by looking at each element in a list of data until it either finds the target or reaches the end. This results in O(n) performance on a given list.
A binary search comes with the prerequisite that the data must be sorted. We can leverage this information to decrease the number of items we need to look at to find our target. We know that if we look at a random item in the data (let's say the middle item) and that item is greater than our target, then all items to the right of that item will also be greater than our target. This means that we only need to look at the left part of the data. Basically, each time we search for the target and miss, we can eliminate half of the remaining items. This gives us a nice O(log n) time complexity.
Just remember that sorting data, even with the most efficient algorithm, will always be slower than a linear search (the fastest sorting algorithms are O(n * log n)). So you should never sort data just to perform a single binary search later on. But if you will be performing many searches (say at least O(log n) searches), it may be worthwhile to sort the data so that you can perform binary searches. You might also consider other data structures such as a hash table in such situations.
In short, binary search runs in O(log n) time whereas linear search runs in O(n) time, so binary search has the better asymptotic performance.
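To make the contrast concrete, here is an illustrative Python sketch of both searches (example code of our own; note that binary_search assumes the input is already sorted, while linear_search does not):

def linear_search(items, target):
    # Examine every element in turn: O(n) comparisons in the worst case.
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

def binary_search(sorted_items, target):
    # Halve the remaining range on every miss: O(log n) comparisons.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1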
Describe the efficiency of linear search
In computer science, linear search or sequential search is a method for finding a particular value in a list that consists of checking every one of its elements, one at a time and in sequence, until the desired one is found.
Linear search is the simplest search algorithm; it is a special case of brute-force search. Its worst case cost is proportional to the number of elements in the list; and so is its expected cost, if all list elements are equally likely to be searched for. Therefore, if the list has more than a few elements, other methods (such as binary search or hashing) will be faster, but they also impose additional requirements.
Efficiency of Linear Search
Having implemented the linear search algorithm, how would you measure its efficiency? A useful measure (or metric) would be general, applicable to any (search) algorithm. Since a more efficient algorithm would take less time to execute, one approach would be to write a program for each of the algorithms to be compared, and execute them, measuring the time each takes to finish. However, a better metric would allow algorithms to be evaluated before implementing them.
One such metric is the number of main steps the algorithm will require to finish. Of course, the exact number of steps depends on the input data. For the linear search algorithm, the number of steps depends on whether the target is in the list, and if so, where in the list, as well as on the length of the list.
For search algorithms, the main steps are the comparisons of list values with the target value. Counting these for data models representing the best case, the worst case, and the average case produces the following table. For each case, the number of steps is expressed in terms of n, the number of items in the list.

Case          | Number of comparisons
--------------|---------------------------------------------------------------------
Best case     | 1 (the target is the first element examined)
Worst case    | n (the target is last, or not in the list at all)
Average case  | about n / 2 (assuming the target is present and equally likely to be at any position)
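One way to see these counts directly is to instrument a linear search so that it reports how many comparisons it performed (illustrative Python code of our own, not part of the original text):

def linear_search_with_count(items, target):
    comparisons = 0
    for index, value in enumerate(items):
        comparisons += 1
        if value == target:
            return index, comparisons   # best case: 1 comparison when the target is first
    return -1, comparisons              # worst case: n comparisons when the target is absent

For example, linear_search_with_count([7, 3, 9], 7) returns (0, 1), while searching the same list for a value that is not present returns (-1, 3).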
What do you mean by minimum spanning tree?
Given a connected, undirected graph, a spanning tree of that graph is a subgraph that is a tree and connects all the vertices together. A single graph can have many different spanning trees. We can also assign a weight to each edge, which is a number representing how unfavorable it is, and use this to assign a weight to a spanning tree by computing the sum of the weights of the edges in that spanning tree. A minimum spanning tree (MST) or minimum weight spanning tree is then a spanning tree with weight less than or equal to the weight of every other spanning tree. More generally, any undirected graph (not necessarily connected) has a minimum spanning forest, which is a union of minimum spanning trees for its connected components.
One example would be a telecommunications company laying cable to a new neighborhood. If it is constrained to bury the cable only along certain paths, then there would be a graph representing which points are connected by those paths. Some of those paths might be more expensive, because they are longer, or require the cable to be buried deeper; these paths would be represented by edges with larger weights. A spanning tree for that graph would be a subset of those paths that has no cycles but still connects to every house. There might be several spanning trees possible. A minimum spanning tree would be one with the lowest total cost.
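The passage above does not name a particular algorithm for computing a minimum spanning tree; one common choice is Kruskal's algorithm, sketched below in Python with a simple union-find structure (illustrative code of our own, with edge weights standing in for the cable-laying costs in the example):

def kruskal_mst(num_vertices, edges):
    # edges is a list of (weight, u, v) tuples; vertices are numbered 0 .. num_vertices - 1.
    parent = list(range(num_vertices))

    def find(x):
        # Follow parent links to the root, compressing the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):    # consider edges in order of increasing weight
        root_u, root_v = find(u), find(v)
        if root_u != root_v:              # the edge joins two components, so no cycle is created
            parent[root_u] = root_v
            mst.append((u, v, weight))
    return mst

On a disconnected graph the same code returns a minimum spanning forest, matching the remark in the definition above.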