Tuesday, 28 April 2015



Detailed Architecture of 8051

Description

Architecture of 8051

1. Oscillator and clock generator:

All operations in a microcontroller are synchronized with the help of an oscillator clock. The oscillator generates the clock pulses by which all internal operations are synchronized. A resonant network connected through pins XTAL1 and XTAL2 forms the oscillator; for this purpose a quartz crystal and two capacitors are employed. The crystal runs between specified minimum and maximum frequencies, typically 1 MHz to 16 MHz.

2. ALU:

The ALU is an 8-bit unit. It performs arithmetic operations such as addition, subtraction, multiplication, division, increment and decrement, and logical operations such as AND, OR and XOR. It manipulates 8-bit and 16-bit data, calculates the addresses of jump locations in relative branch instructions, and performs compare, rotate and complement operations. It also contains a Boolean processor which performs bit set, test, clear and complement operations. The 8051 microcontroller contains 34 general-purpose (working) registers: 2 of them are the math registers A and B, and the other 32 form the register banks.

a. Accumulator (A-reg):

It is an 8-bit register. Its address is E0H and it is bit- and byte-addressable. The result of arithmetic and logic operations performed by the ALU is accumulated in this register, hence the name accumulator. It is used to store 8-bit data and to hold one operand of the ALU during arithmetic and logical operations. Most instructions operate on the accumulator, making it the more versatile of the two CPU math registers.

b. B-register:

It is a special 8-bit math register, bit- and byte-addressable. It is used in conjunction with the A register as an input operand for the ALU, and can also serve as a general-purpose register to store 8-bit data.

c. PSW:

It is an 8-bit register. Its address is D0H and it is bit- and byte-addressable. It has 4 conditional (math) flags which are set or reset according to the result of an operation, and 3 control flags which can be set or reset to select a required operation or function. The format of the flag register is:

CY (D7) | AC (D6) | F0 (D5) | RS1 (D4) | RS0 (D3) | OV (D2) | reserved (D1) | P (D0)

i. MATH FLAG:

1. Carry Flag(CY):

If during addition or subtraction a carry or borrow is generated out of bit 7, the carry flag is set; otherwise it is reset. It is used in arithmetic, logical, jump, rotate and Boolean operations.

2. Auxiliary carry flag(AC):

If during addition or subtraction a carry or borrow is generated from the lower 4 bits to the higher 4 bits (from bit 3 into bit 4), AC is set; otherwise it is reset. It is used in BCD arithmetic operations.

3. Overflow flag(OV):

If in a signed arithmetic operation the result exceeds the range representable in 7 bits plus sign, the OV flag is set; otherwise it is reset. It is used in signed arithmetic operations only.

4. Parity flag(P):

If the result in the accumulator contains an odd number of ones ("1"), the parity flag is set; if it contains an even number of ones, the parity flag is reset. (The P flag thus makes the overall parity of the accumulator plus P even.)
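As an illustration, the effect of these four math flags on an 8-bit addition can be sketched in C++. The addFlags helper below is hypothetical (not part of the 8051); the flag rules follow the descriptions above, with P set when the result holds an odd number of 1s:

```cpp
#include <cassert>
#include <cstdint>

struct Flags { bool cy, ac, ov, p; };

// Computes CY, AC, OV and P for the 8-bit addition a + b.
Flags addFlags(uint8_t a, uint8_t b) {
    uint16_t sum = uint16_t(a) + uint16_t(b);
    uint8_t r = uint8_t(sum);
    Flags f;
    f.cy = sum > 0xFF;                          // carry out of bit 7
    f.ac = ((a & 0x0F) + (b & 0x0F)) > 0x0F;    // carry from bit 3 into bit 4
    f.ov = (~(a ^ b) & (a ^ r) & 0x80) != 0;    // signed overflow into the sign bit
    int ones = 0;                               // count the 1 bits of the result
    for (int i = 0; i < 8; ++i) ones += (r >> i) & 1;
    f.p = (ones % 2) != 0;                      // P set for an odd number of 1s
    return f;
}
```

For example, 0x7F + 0x01 produces 0x80: no carry out, an auxiliary carry, a signed overflow, and odd parity.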

ii. CONTROL FLAGS:

1. F0:

It is a user-defined flag: the user defines its function and can set, test and clear it through software.

2. RS1 and RS0:

These flags are used to select one of the four register banks, as shown in the table:

RS1  RS0  Bank    Address range
 0    0   Bank 0  00H-07H
 0    1   Bank 1  08H-0FH
 1    0   Bank 2  10H-17H
 1    1   Bank 3  18H-1FH

3. Program counter (PC):

The Program Counter (PC) is a 2-byte (16-bit) register that holds the address of the next instruction to execute, in internal or external program memory. When the 8051 is initialized the PC starts at 0000H and is incremented after each instruction fetch. Note that the PC is not always incremented by one (it advances by the size of the instruction, 1 to 3 bytes) and it is never decremented; jumps and calls simply load it with a new address.

4. Data pointer register (DPTR):

It is a 16-bit register used to hold the address of external or internal RAM where data is stored or where a result is to be placed; it can also be used to store 16-bit data. It is divided into two 8-bit registers: DPH, the data pointer high byte (83H), and DPL, the data pointer low byte (82H). Each can be used as a general-purpose register to store 8-bit data. DPTR does not have a single internal address. It functions as the base register in base-relative addressing and in indirect jumps.

5. Stack pointer(SP):

It is an 8-bit register, byte-addressable, at address 81H. It holds the address of the internal RAM location currently used as the top of the stack. When data is placed on the stack by a push instruction, the content of the stack pointer is incremented by 1 before the write; when data is retrieved from the stack, the content of the stack pointer is decremented by 1 after the read.
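A minimal C++ sketch of this pre-increment push / post-decrement pop behaviour (the Stack8051 struct is illustrative; 07H is the 8051's reset value for SP):

```cpp
#include <cassert>
#include <cstdint>

struct Stack8051 {
    uint8_t ram[256] = {0};   // internal RAM
    uint8_t sp = 0x07;        // SP reset value on the 8051

    void push(uint8_t v) { ram[++sp] = v; }    // SP incremented first, then write
    uint8_t pop()        { return ram[sp--]; } // value read, then SP decremented
};
```

After two pushes SP points at the last byte written; two pops restore SP to its original value.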

iii. Special function registers (SFRs): The 8051 microcontroller has 11 SFRs divided into 4 groups. A. Timer/Counter registers: the 8051 has two 16-bit timer/counter registers, Timer 0 (T0) and Timer 1 (T1). Each 16-bit register is divided into a lower-byte and a higher-byte register: TL0 (8AH) and TH0 (8CH), TL1 (8BH) and TH1 (8DH). These registers hold the initial count, and all four are byte-addressable.

1. Timer control register:

The 8051 microcontroller has two 8-bit timer control registers, TMOD and TCON. TMOD register: it is an 8-bit register at address 89H, byte-addressable. It is used to select the mode and control the operation of the timers by writing a control word.

2. TCON register:

It is an 8-bit register at address 88H. It is bit/byte-addressable. Its upper 4 bits control the operation of the timers/counters, and its lower 4 bits are used for external interrupt control.

B. Serial data registers: the 8051 microcontroller has two serial data registers, SBUF and SCON.

1. Serial buffer register (SBUF):

It is an 8-bit register, byte-addressable, at address 99H. It is used to hold data that is to be transferred serially.

2. Serial control register (SCON):

It is an 8-bit register, bit/byte-addressable, at address 98H. The 8 bits loaded into this register control the operation of serial communication.

C. Interrupt registers: the 8051 µC has two 8-bit interrupt registers.

1. Interrupt enable register (IE):

It is an 8-bit register, bit/byte-addressable, at address A8H. It is used to enable and disable individual interrupts.

2. Interrupt priority register (IP):

It is an 8-bit register, bit/byte-addressable, at address B8H. It is used to select a low- or high-level priority for each individual interrupt.

D. Power control register (PCON): it is an 8-bit register, byte-addressable, at address 87H. Its bits control the power-saving circuitry (idle or power-down mode), and one bit is used to modify the baud rate of serial communication.

Internal RAM

Internal RAM is 128 bytes (see the 8051 hardware for the internal RAM design). It is organized into three distinct areas:

- 32 bytes of working registers, addresses 00H to 1FH
- 16 bytes of bit-addressable RAM, byte addresses 20H to 2FH (128 addressable bits in total)
- general-purpose RAM from 30H to 7FH

Internal ROM

Data memory and program code memory are physically separate but share the same address values. The internal ROM occupies addresses 0000H to 0FFFH, and the PC addresses program code starting at 0000H. Program addresses higher than 0FFFH, which exceed the internal ROM capacity, cause the 8051 to fetch code bytes from external program memory.

Wednesday, 22 April 2015

  • Concept of Hashing
  • Need of Hashing
  • Hash Collision
  • Dealing with Hash Collision
  • Resolving Hash Collisions by Open Addressing
  • Primary Clustering
  • Double Rehash
  • 2. Concept of Hashing
    • Hashing is a technique that gives almost constant-time insertion, deletion and find operations.
    • As a very simple example, an array with its index as the key is a table: each index (key) can be used to access a value in constant search time.
    • The mapping key must be simple to compute and must help in identifying the associated record.
    • A function that generates such keys is termed a hash function.
  • 3. Hashing
    • Let h(key) be a hash function that returns the hash code, e.g. h(key) = key % 1000, which can produce any value between 0 and 999, as shown in the figure.
  • 4. Need of Hashing
    • Hashing maps large data sets of variable length to smaller data sets of a fixed length. For example, consider an inventory file of a company having more than 100 items, where the key to each record is a seven-digit part number. Direct indexing on the entire seven-digit key would require an array of 10 million elements, which is clearly a waste of space, since the company is unlikely to stock more than a few thousand parts.
    • Hashing provides an alternative: it converts the seven-digit key into an integer within a limited range. The values returned by a hash function are called hash values or hash codes.
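The modular hash used in the slides can be sketched in a couple of lines (the function name h follows the slide's notation):

```cpp
#include <cassert>

// h(key) = key % 1000 maps a seven-digit part number
// into one of 1000 table slots (0..999).
int h(int key) {
    return key % 1000;
}
```

For instance, part number 1234567 lands in slot 567.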
  • 5.
    • Suppose two keys k1 and k2 hash such that h(k1) = h(k2). The two keys hash to the same value and would occupy the same slot in the hash table, which is unacceptable.
    • Such a situation is termed a hash collision.
  • 6. Dealing with Hash Collision
    • Two methods to deal with hash collision are rehashing and chaining.
    • Rehashing: invokes a secondary hash function (say Rh(key)), which is applied successively until an empty slot is found where the record can be placed.
    • Chaining: builds a linked list of items whose keys hash to the same value. During search this short linked list is traversed sequentially for the desired key. This technique requires an extra link field in each table position.
  • 7. Hashing Analysis
    • The worst-case running time for insertion is O(1).
    • Deletion of an element x can be accomplished in O(1) time if the lists are doubly linked.
    • In the worst case of chained hashing, all n keys hash to the same slot, creating a list of length n. The worst-case time for search is thus θ(n) plus the time to compute the hash function.
  • 8.
    • A good hash function minimizes collisions and spreads the records uniformly throughout the table; that is why it is desirable to have an array larger than the actual number of records.
    • More formally, suppose we want to store a set of size n in a table of size m. The ratio α = n/m is called the load factor: the average number of elements stored per table slot.
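The chaining scheme and the load factor α = n/m described above can be sketched as follows. The class name and the table size 11 are illustrative choices, not from the slides:

```cpp
#include <cassert>
#include <list>
#include <vector>

// Separate chaining: each table slot holds a linked list (chain) of the
// keys that hash to it.
class ChainedHash {
    std::vector<std::list<int>> table;
    int count = 0;                       // n: number of stored keys
public:
    explicit ChainedHash(int m) : table(m) {}
    int hash(int key) const { return key % (int)table.size(); }
    void insert(int key) {               // O(1): prepend to the chain
        table[hash(key)].push_front(key);
        ++count;
    }
    bool find(int key) const {           // traverse the short chain sequentially
        for (int k : table[hash(key)])
            if (k == key) return true;
        return false;
    }
    double loadFactor() const {          // alpha = n / m
        return double(count) / table.size();
    }
};
```

Keys 5 and 16 both hash to slot 5 in an 11-slot table, so they end up chained in the same slot, yet both remain findable.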
  • 1. Prim's Algorithm on Minimum Spanning Trees
  • 2. What is Minimum Spanning Tree?• Given a connected, undirected graph, a spanning tree of that graph is a subgraph which is a tree and connects all the vertices together.• A single graph can have many different spanning trees.• A minimum spanning tree is then a spanning tree with weight less than or equal to the weight of every other spanning tree.
  • 3. (Figure: a graph G and a spanning tree derived from G)
  • 4. Algorithms for finding a Minimum Spanning Tree
    • Prim's Algorithm
    • Kruskal's Algorithm
    • Borůvka's Algorithm
  • 5. About Prim's Algorithm
    • The algorithm was discovered in 1930 by mathematician Vojtěch Jarník and later independently by computer scientist Robert C. Prim in 1957.
    • The algorithm continuously increases the size of a tree, starting with a single vertex, until it spans all the vertices. Prim's algorithm is faster on dense graphs.
    • Prim's algorithm runs in O(n*n), but the running time can be reduced using a simple binary heap data structure and an adjacency-list representation.
  • 6.
    • Prim's algorithm for finding a minimal spanning tree closely parallels the depth-first and breadth-first traversal algorithms. Just as those algorithms maintain a closed list of nodes and the paths leading to them, Prim's algorithm maintains a closed list of nodes and the edges that link them into the minimal spanning tree.
    • Whereas the depth-first algorithm uses a stack to maintain the list of open nodes and the breadth-first traversal uses a queue, Prim's uses a priority queue.
  • 7. Let’s see an example to understand Prim’s Algorithm.
  • 8. First we declare an array named the closed list, and treat the open list as a priority queue backed by a min-heap. Adding a node and its edge to the closed list indicates that we have found an edge that links the node into the minimal spanning tree. As a node is added to the closed list, its successors (immediately adjacent nodes) are examined and added to the priority queue of open nodes.
  • 9. Total Cost: 0 | Open List: d | Closed List: (empty)
  • 10. Total Cost: 0 | Open List: a, f, e, b | Closed List: d
  • 11. Total Cost: 5 | Open List: f, e, b | Closed List: d, a
  • 12. Total Cost: 11 | Open List: b, e, g | Closed List: d, a, f
  • 13. Total Cost: 18 | Open List: e, g, c | Closed List: d, a, f, b
  • 14. Total Cost: 25 | Open List: c, g | Closed List: d, a, f, b, e
  • 15. Total Cost: 30 | Open List: g | Closed List: d, a, f, b, e, c
  • 16. Total Cost: 39 | Open List: (empty) | Closed List: d, a, f, b, e, c, g
  • 17. Pseudo-code for Prim's Algorithm

        Designate one node as the start node
        Add the start node to the priority queue of open nodes
        WHILE (there are still nodes to be added to the closed list) {
            Remove a node from the priority queue of open nodes, designate it the current node
            IF (the current node is not already in the closed list) {
                IF the node is not the first node removed from the priority queue,
                    add the minimal edge connecting it with a closed node to the minimal spanning tree
                Add the current node to the closed list
                FOR each successor of the current node
                    IF (the successor is not already in the closed list OR
                        the successor is now connected to a closed node by an edge of lesser weight than before)
                        Add that successor to the priority queue of open nodes
            }
        }
  • 18. Sample C++ Implementation (reconstructed from the garbled listing; it assumes an adjacency-matrix Graph interface and runs in O(V*V) with a linear-scan minvertex):

        // Return the unvisited vertex with the smallest tentative distance.
        int minvertex(Graph &g, int *dist) {
            int v = -1;
            for (int i = 0; i < g.num_nodes; i++)
                if (g.is_marked(i, UNVISITED) && (v == -1 || dist[i] < dist[v]))
                    v = i;
            return v;
        }

        void prim(Graph &g, int s) {
            int *dist = new int[g.num_nodes];  // cheapest known edge weight to each open vertex
            int *vert = new int[g.num_nodes];  // closed endpoint of that cheapest edge
            for (int i = 0; i < g.num_nodes; i++)
                dist[i] = INFINITY;
            dist[s] = 0;
            for (int i = 0; i < g.num_nodes; i++) {
                int v = minvertex(g, dist);
                g.mark(v, VISITED);
                if (dist[v] == INFINITY) break;        // remaining vertices unreachable
                if (v != s) add_edge_to_MST(vert[v], v);
                for (Edge w = g.first_edge(v); g.is_edge(w); w = g.next_edge(w))
                    if (dist[g.second_vert(w)] > g.weight(w)) {
                        dist[g.second_vert(w)] = g.weight(w);
                        vert[g.second_vert(w)] = v;
                    }
            }
            delete[] dist;
            delete[] vert;
        }
  • 19. Complexity Analysis

        Minimum-edge-weight data structure    Time complexity (total)
        adjacency matrix, linear search       O(V*V)
        binary heap and adjacency list        O((V + E) log V) = O(E log V)
        Fibonacci heap and adjacency list     O(E + V log V)
  • 20. Applications
    • One practical application of an MST is network design. For instance, a group of individuals separated by varying distances wish to be connected in a telephone network. Because the cost between each pair of terminals differs, Prim's algorithm is a way to minimize the total expense. Similarly, it can connect all the computers in a computer science building using the least amount of cable.
    • A less obvious application is that the minimum spanning tree can be used to approximately solve the travelling salesman problem: find the shortest path that visits each point at least once.
    • Another useful application of an MST is finding airline routes. The vertices of the graph represent cities and the edges represent routes between them. The further one has to travel, the more it costs, so an MST can optimize airline routes by finding the least costly paths with no cycles.


Recovering unallocated space of a USB flash drive.

Today I played with the latest SUSE Linux Live. I did not have a DVD drive and used a USB flash drive instead. Afterwards I wanted to reformat my flash drive, but found that it was not possible: most of the disk space was unallocated, and my Windows 8 did not allow me to use it.

Microsoft DiskPart version 6.2.9200
Copyright (C) 1999-2012 Microsoft Corporation.
On computer: COMPUTER

DISKPART> list disk
  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          298 GB      0 B
  Disk 1    Online         7509 MB  6619 MB

DISKPART> select disk 1
Disk 1 is now the selected disk.
DISKPART> clean
DiskPart succeeded in cleaning the disk.
DISKPART> create partition primary
DiskPart succeeded in creating the specified partition.
DISKPART> exit

Thursday, 26 February 2015

  • 1. Trees and Graphs: Trees, Binary Search Trees, Balanced Trees, Graphs
    • Svetlin Nakov
    • Telerik Corporation
    • www.telerik.com
  • 2. Table of Contents
    • Tree-like Data Structures
    • Trees and Related Terminology
    • Implementing Trees
    • Traversing Trees
    • Balanced Trees
    • Graphs
  • 3. Tree-like Data Structures Trees, Balanced Trees, Graphs, Networks
  • 4. Tree-like Data Structures
    • Tree-like data structures are
      • Branched recursive data structures
        • Consisting of nodes
        • Each node can be connected to other nodes
    • Examples of tree-like structures
      • Trees: binary / other degree, balanced, etc.
      • Graphs: directed / undirected, weighted, etc.
      • Networks
  • 5. Tree-like Data Structures (figures: an example tree, a weighted graph, a network, and an organization chart with Project Manager, Team Leaders, Designer, QA, Developers and Testers)
  • 6. Trees and Related Terminology Node, Edge, Root, Children, Parent, Leaf , Binary Search Tree, Balanced Tree
  • 7. Trees
    • Tree data structure – terminology
      • Node, edge, root, child, children, siblings, parent, ancestor, descendant, predecessor, successor, internal node, leaf, depth, height, subtree
    (Figure: example tree of height 2, with nodes at depths 0, 1 and 2)
  • 8. Binary Trees
    • Binary trees : most widespread form
      • Each node has at most 2 children
    (Figure: binary tree showing the root node, left and right subtrees, and left and right children)
  • 9. Binary Search Trees
    • Binary search trees are ordered
      • For each node x in the tree
        • All the elements of the left subtree of x are ≤ x
        • All the elements of the right subtree of x are > x
    • Binary search trees can be balanced
      • Balanced trees have height of ~ log 2 ( n ), where n is the number of nodes
      • Balanced trees have for each node nearly equal number of nodes in its subtrees
  • 10. Binary Search Trees (2)
    • Example of balanced binary search tree
    • If the tree is balanced, add / search / delete operations take approximately log( n ) steps
    (Figure: balanced binary search tree with root 17, left subtree 9 {6, 12}, right subtree 19 {25})
  • 11. Implementing Trees Recursive Tree Data Structure
  • 12. Recursive Tree Definition
    • The recursive definition for tree data structure:
      • A single node is tree
      • Tree nodes can have zero or multiple children that are also trees
    • Tree node definition in C#
    public class TreeNode<T>
    {
        private T value;                    // the value contained in the node
        private List<TreeNode<T>> children; // list of child nodes, which are of the same type
        …
    }
  • 13. TreeNode<int> Structure TreeNode<int> int value List<TreeNode<int>> children 7 children 19 children 21 children 14 children 1 children 12 children 31 children 23 children 6 children
  • 14. Implementing TreeNode <T> public TreeNode(T value) { this.value = value; this.children = new List<TreeNode<T>>(); } public T Value { get { return this.value; } set { this.value = value; } } public void AddChild(TreeNode<T> child) { child.hasParent = true; this.children.Add(child); } public TreeNode<T> GetChild(int index) { return this.children[index]; }
  • 15.
    • The class Tree<T> keeps tree's root node
    Implementing Tree<T> public class Tree<T> { private TreeNode<T> root; public Tree(T value, params Tree<T>[] children): this(value) { foreach (Tree<T> child in children) { this.root.AddChild(child.root); } } public TreeNode<T> Root { get { return this.root; } } } Flexible constructor for building trees
  • 16. Building a Tree
    • Constructing tree by nested constructors:
    Tree<int> tree = new Tree<int>(7, new Tree<int>(19, new Tree<int>(1), new Tree<int>(12), new Tree<int>(31)), new Tree<int>(21), new Tree<int>(14, new Tree<int>(23), new Tree<int>(6)) ); 7 14 19 23 6 21 31 1 12
  • 17. Tree Traversals DFS and BFS Traversals
  • 18. Tree Traversal Algorithms
    • Traversing a tree means to visit each of its nodes exactly once, in a particular order
      • Many traversal algorithms are known
      • Depth-First Search (DFS)
        • Visit node's successors first
        • Usually implemented by recursion
      • Breadth-First Search (BFS)
        • Nearest nodes visited first
        • Implemented by a queue
  • 19.
    • Depth-First Search first visits all descendants of given node recursively, finally visits the node itself
    • DFS algorithm pseudo code
    Depth-First Search (DFS)

    DFS(node)
    {
        for each child c of node
            DFS(c);
        print the current node;
    }

    (Figure: the example tree annotated with the DFS visit order)
  • 20. DFS in Action (Step 1)
    • Stack: 7
    • Output: (empty)
  • 21. DFS in Action (Step 2)
    • Stack: 7, 19
    • Output: (empty)
  • 22. DFS in Action (Step 3)
    • Stack: 7, 19, 1
    • Output: (empty)
  • 23. DFS in Action (Step 4)
    • Stack: 7, 19
    • Output: 1
  • 24. DFS in Action (Step 5)
    • Stack: 7, 19, 12
    • Output: 1
  • 25. DFS in Action (Step 6)
    • Stack: 7, 19
    • Output: 1, 12
  • 26. DFS in Action (Step 7)
    • Stack: 7, 19, 31
    • Output: 1, 12
  • 27. DFS in Action (Step 8)
    • Stack: 7, 19
    • Output: 1, 12, 31
  • 28. DFS in Action (Step 9)
    • Stack: 7
    • Output: 1, 12, 31, 19
  • 29. DFS in Action (Step 10)
    • Stack: 7, 21
    • Output: 1, 12, 31, 19
  • 30. DFS in Action (Step 11)
    • Stack: 7
    • Output: 1, 12, 31, 19, 21
  • 31. DFS in Action (Step 12)
    • Stack: 7, 14
    • Output: 1, 12, 31, 19, 21
  • 32. DFS in Action (Step 13)
    • Stack: 7, 14, 23
    • Output: 1, 12, 31, 19, 21
  • 33. DFS in Action (Step 14)
    • Stack: 7, 14
    • Output: 1, 12, 31, 19, 21, 23
  • 34. DFS in Action (Step 15)
    • Stack: 7, 14, 6
    • Output: 1, 12, 31, 19, 21, 23
  • 35. DFS in Action (Step 16)
    • Stack: 7, 14
    • Output: 1, 12, 31, 19, 21, 23, 6
  • 36. DFS in Action (Step 17)
    • Stack: 7
    • Output: 1, 12, 31, 19, 21, 23, 6, 14
  • 37. DFS in Action (Step 18)
    • Stack: (empty)
    • Output: 1, 12, 31, 19, 21, 23, 6, 14, 7
    • Traversal finished
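The recursive DFS traced above can be sketched in C++ on the example tree (7 with children 19, 21, 14; 19 has children 1, 12, 31; 14 has children 23, 6). The Node struct and the sampleTree helper are illustrative, not from the slides:

```cpp
#include <cassert>
#include <vector>

struct Node {
    int value;
    std::vector<Node> children;
};

// Builds the example tree: 7 -> {19, 21, 14}, 19 -> {1, 12, 31}, 14 -> {23, 6}.
Node sampleTree() {
    return Node{7, {Node{19, {Node{1, {}}, Node{12, {}}, Node{31, {}}}},
                    Node{21, {}},
                    Node{14, {Node{23, {}}, Node{6, {}}}}}};
}

// Visits all descendants of a node first, then the node itself,
// exactly as in the pseudo code.
void dfs(const Node& n, std::vector<int>& out) {
    for (const Node& c : n.children) dfs(c, out);
    out.push_back(n.value);
}
```

On the sample tree this produces the same output sequence as the step-by-step trace: 1, 12, 31, 19, 21, 23, 6, 14, 7.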
  • 38.
    • Breadth-First Search first visits the neighbor nodes, later their neighbors, etc.
    • BFS algorithm pseudo code
    Breadth-First Search (BFS)

    BFS(node)
    {
        queue ← node
        while queue not empty
            v ← queue
            print v
            for each child c of v
                queue ← c
    }

    (Figure: the example tree annotated with the BFS visit order)
  • 39. BFS in Action (Step 1)
    • Queue: 7
    • Output: 7
  • 40. BFS in Action (Step 2)
    • Queue: 7, 19
    • Output: 7
  • 41. BFS in Action (Step 3)
    • Queue: 7, 19, 21
    • Output: 7
  • 42. BFS in Action (Step 4)
    • Queue: 7, 19, 21, 14
    • Output: 7
  • 43. BFS in Action (Step 5)
    • Queue: 7, 19, 21, 14
    • Output: 7, 19
  • 44. BFS in Action (Step 6)
    • Queue: 7, 19, 21, 14, 1
    • Output: 7, 19
  • 45. BFS in Action (Step 7)
    • Queue: 7, 19, 21, 14, 1, 12
    • Output: 7, 19
  • 46. BFS in Action (Step 8)
    • Queue: 7, 19, 21, 14, 1, 12, 31
    • Output: 7, 19
  • 47. BFS in Action (Step 9)
    • Queue: 7, 19, 21, 14, 1, 12, 31
    • Output: 7, 19, 21
  • 48. BFS in Action (Step 10)
    • Queue: 7, 19, 21, 14, 1, 12, 31
    • Output: 7, 19, 21, 14
  • 49. BFS in Action (Step 11)
    • Queue: 7, 19, 21, 14, 1, 12, 31, 23
    • Output: 7, 19, 21, 14
  • 50. BFS in Action (Step 12)
    • Queue: 7, 19, 21, 14, 1, 12, 31, 23, 6
    • Output: 7, 19, 21, 14
  • 51. BFS in Action (Step 13)
    • Queue: 7, 19, 21, 14, 1, 12, 31, 23, 6
    • Output: 7, 19, 21, 14, 1
  • 52. BFS in Action (Step 14)
    • Queue: 7, 19, 21, 14, 1, 12, 31, 23, 6
    • Output: 7, 19, 21, 14, 1, 12
  • 53. BFS in Action (Step 15)
    • Queue: 7, 19, 21, 14, 1, 12, 31, 23, 6
    • Output: 7, 19, 21, 14, 1, 12, 31
  • 54. BFS in Action (Step 16)
    • Queue: 7, 19, 21, 14, 1, 12, 31, 23, 6
    • Output: 7, 19, 21, 14, 1, 12, 31, 23
  • 55. BFS in Action (Step 17)
    • Queue: 7, 19, 21, 14, 1, 12, 31, 23, 6
    • Output: 7, 19, 21, 14, 1, 12, 31, 23, 6
  • 56. BFS in Action (Step 18)
    • Queue: 7, 19, 21, 14, 1, 12, 31, 23, 6
    • Output: 7, 19, 21, 14, 1, 12, 31, 23, 6
    • The queue is empty → stop
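The queue-based BFS can be sketched the same way on the example tree (Node and sampleTree are again illustrative helpers, not from the slides):

```cpp
#include <cassert>
#include <queue>
#include <vector>

struct Node {
    int value;
    std::vector<Node> children;
};

// The example tree: 7 -> {19, 21, 14}, 19 -> {1, 12, 31}, 14 -> {23, 6}.
Node sampleTree() {
    return Node{7, {Node{19, {Node{1, {}}, Node{12, {}}, Node{31, {}}}},
                    Node{21, {}},
                    Node{14, {Node{23, {}}, Node{6, {}}}}}};
}

std::vector<int> bfs(const Node& root) {
    std::vector<int> out;
    std::queue<const Node*> q;
    q.push(&root);
    while (!q.empty()) {
        const Node* v = q.front();
        q.pop();
        out.push_back(v->value);                   // nearest nodes are visited first
        for (const Node& c : v->children) q.push(&c);
    }
    return out;
}
```

The output matches the trace: 7, 19, 21, 14, 1, 12, 31, 23, 6.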
  • 57. Binary Trees DFS Traversals
    • DFS traversal of binary trees can be done in pre-order, in-order and post-order
      • Pre-order: root, left, right → 17, 9, 6, 12, 19, 25
      • In-order: left, root, right → 6, 9, 12, 17, 19, 25
      • Post-order: left, right, root → 6, 12, 9, 25, 19, 17
    (Figure: the example BST with root 17, left subtree 9 {6, 12}, right subtree 19 {25})
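The three traversal orders can be checked with a small C++ sketch. BNode is an illustrative struct; the example tree matches the slide (root 17, left child 9 with children 6 and 12, right child 19 with right child 25):

```cpp
#include <cassert>
#include <vector>

struct BNode {
    int value;
    const BNode* left;
    const BNode* right;
};

void preOrder(const BNode* n, std::vector<int>& out) {
    if (!n) return;
    out.push_back(n->value);      // root first
    preOrder(n->left, out);
    preOrder(n->right, out);
}

void inOrder(const BNode* n, std::vector<int>& out) {
    if (!n) return;
    inOrder(n->left, out);
    out.push_back(n->value);      // root between the subtrees
    inOrder(n->right, out);
}

void postOrder(const BNode* n, std::vector<int>& out) {
    if (!n) return;
    postOrder(n->left, out);
    postOrder(n->right, out);
    out.push_back(n->value);      // root last
}
```

On this BST, in-order visits the keys in sorted order: 6, 9, 12, 17, 19, 25.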
  • 58. Iterative DFS and BFS
    • What will happen if in the Breadth-First Search (BFS) algorithm a stack is used instead of queue?
      • An iterative Depth-First Search (DFS) – in-order
    BFS(node)
    {
        queue ← node
        while queue not empty
            v ← queue
            print v
            for each child c of v
                queue ← c
    }

    DFS(node)
    {
        stack ← node
        while stack not empty
            v ← stack
            print v
            for each child c of v
                stack ← c
    }
  • 59. Trees and Traversals Live Demo
  • 60. Balanced Search Trees AVL Trees, B-Trees, Red-Black Trees, AA-Trees
  • 61. Balanced Binary Search Trees
    • Ordered Binary Trees ( Binary Search Trees )
      • For each node x the left subtree has values ≤ x and the right subtree has values > x
    • Balanced Trees
      • For each node its subtrees contain nearly equal number of nodes  nearly the same height
    • Balanced Binary Search Trees
      • Ordered binary search trees that have height of log 2 (n) where n is the number of their nodes
      • Searching costs about log 2 (n) comparisons
  • 62. Balanced Binary Search Tree – Example 33 18 15 24 3 17 20 29 54 42 60 37 43 59 85
  • 63. Balanced Binary Search Trees
    • Balanced binary search trees are hard to implement
      • Rebalancing the tree after insert / delete is complex
    • Well known implementations of balanced binary search trees
      • AVL trees – ideally balanced, very complex
      • Red-black trees – roughly balanced, more simple
      • AA-Trees – relatively simple to implement
    • Find / insert / delete operations need log 2 (n) steps
  • 64. B-Trees
    • B-trees are generalization of the concept of ordered binary search trees
      • A B-tree of minimum degree d has between d-1 and 2*d-1 keys in a node and between d and 2*d child nodes
      • The keys in each node are ordered increasingly
      • All keys in a child node have values between their left and right parent keys
    • If the b-tree is balanced, its search / insert / add operations take about log(n) steps
    • B-trees can be efficiently stored on the disk
  • 65.
    • B-Tree of order 2 , also known as 2 - 3 - 4 -tree:
    B-Tree – Example 17 21 7 11 18 20 26 31 2 4 5 6 8 9 12 16 22 23 25 27 29 30 32 35
  • 66. Balanced Trees in .NET
    • .NET Framework has several built-in implementations of balanced search trees:
      • SortedDictionary<K,V>
        • Red-black tree based map of key-value pairs
      • SortedSet<T>
        • Red-black tree based set of elements
    • External libraries like "Wintellect Power Collections for .NET" are more flexible
      • http://powercollections.codeplex.com
  • 67. Graphs Definitions, Representation, Traversal Algorithms
  • 68. Graph Data Structure
    • Set of nodes with many-to-many relationship between them is called graph
      • Each node can have multiple predecessors
      • Each node can have multiple successors
    Node with multiple predecessors Node with multiple successors 7 19 21 14 1 12 31 4 11
  • 69. Graph Definitions
    • Node ( vertex )
      • Element of graph
      • Can have name or value
      • Keeps a list of adjacent nodes
    • Edge
      • Connection between two nodes
      • Can be directed / undirected
      • Can be weighted / unweighted
      • Can have name / value
    A Node A Edge B
  • 70. Graph Definitions (2)
    • Directed graph
      • Edges have direction
    • Undirected graph
      • Undirected edges
    7 19 21 1 12 4 3 22 2 3 G J F D A E C H
  • 71. Graph Definitions (3)
    • Weighted graph
      • Weight (cost) is associated with each edge
    3 G J F D A E C H Q K N 10 4 14 6 16 9 8 7 5 22
  • 72. Graph Definitions (4)
    • Path (in undirected graph)
      • Sequence of nodes n 1 , n 2 , … n k
      • Edge exists between each pair of nodes n i , n i+1
      • Examples:
        • A, B, C is a path
        • H, K, C is not a path
    G C B A H N K
  • 73. Graph Definitions (5)
    • Path (in directed graph)
      • Sequence of nodes n 1 , n 2 , … n k
      • Directed edge exists between each pair of nodes n i , n i+1
      • Examples:
        • A, B, C is a path
        • A, G, K is not a path
    G C B A H N K
  • 74. Graph Definitions (6)
    • Cycle
      • Path that ends back at the starting node
      • Example:
        • A, B, C, G, A
    • Simple path
      • No cycles in path
    • Acyclic graph
      • Graph with no cycles
      • Acyclic undirected graphs are trees
    G C B A H N K
  • 75. Graph Definitions (8)
    • Two nodes are reachable if
      • Path exists between them
    • Connected graph
      • Every node is reachable from any other node
    Unconnected graph with two connected components G J F D A Connected graph G J F D A E C H
  • 76. Graphs and Their Applications
    • Graphs have many real-world applications
      • Modeling a computer network like Internet
        • Routes are simple paths in the network
      • Modeling a city map
        • Streets are edges, crossings are vertices
      • Social networks
        • People are nodes and their connections are edges
      • State machines
        • States are nodes, transitions are edges
  • 77. Representing Graphs
    • Adjacency list
      • Each node holds a list of its neighbors
    • Adjacency matrix
      • Each cell keeps whether and how two nodes are connected
    • Set of edges
    Example graph with edges 1→2, 1→4, 2→3, 3→1, 4→2:
    • Adjacency list: 1 → {2, 4};  2 → {3};  3 → {1};  4 → {2}
    • Set of edges: {1,2}, {1,4}, {2,3}, {3,1}, {4,2}
    • Adjacency matrix (rows and columns 1 to 4):
      0 1 0 1
      0 0 1 0
      1 0 0 0
      0 1 0 0
  • 78. Representing Graphs in C# public class Graph { int[][] childNodes; public Graph(int[][] nodes) { this.childNodes = nodes; } } Graph g = new Graph(new int[][] { new int[] {3, 6}, // successors of vertice 0 new int[] {2, 3, 4, 5, 6}, // successors of vertice 1 new int[] {1, 4, 5}, // successors of vertice 2 new int[] {0, 1, 5}, // successors of vertice 3 new int[] {1, 2, 6}, // successors of vertice 4 new int[] {1, 2, 3}, // successors of vertice 5 new int[] {0, 1, 4} // successors of vertice 6 }); 0 6 4 1 5 2 3
  • 79. Graph Traversal Algorithms
    • Depth-First Search (DFS) and Breadth-First Search (BFS) can traverse graphs
      • Each vertex should be visited at most once
    BFS(node)
    {
        queue ← node
        visited[node] = true
        while queue not empty
            v ← queue
            print v
            for each child c of v
                if not visited[c]
                    queue ← c
                    visited[c] = true
    }

    DFS(node)
    {
        stack ← node
        visited[node] = true
        while stack not empty
            v ← stack
            print v
            for each child c of v
                if not visited[c]
                    stack ← c
                    visited[c] = true
    }
  • 80. Recursive DFS Graph Traversal

    void TraverseDFSRecursive(node)
    {
        if (not visited[node])
        {
            visited[node] = true
            print node
            foreach child node c of node
            {
                TraverseDFSRecursive(c);
            }
        }
    }

    void Main()
    {
        TraverseDFSRecursive(firstNode);
    }
  • 81. Graphs and Traversals Live Demo
  • 82. Summary
    • Trees are recursive data structure – node with set of children which are also nodes
    • Binary Search Trees are ordered binary trees
    • Balanced trees have height of ~ log(n)
    • Graphs are sets of nodes with many-to-many relationship between them
      • Can be directed/undirected, weighted / unweighted, connected / not connected, etc.
    • Tree / graph traversals can be done by Depth-First Search (DFS) and Breadth-First Search (BFS)
  • 83. Trees and Graphs
    • Questions?
    http://academy.telerik.com
  • 84. Exercises
    • Write a program to traverse the directory C:\WINDOWS and all its subdirectories recursively and to display all files matching the mask *.exe. Use the class System.IO.Directory.
    • Define classes File { string name, int size } and Folder { string name, File[] files, Folder[] childFolders } and, using them, build a tree keeping all files and folders on the hard drive starting from C:\WINDOWS. Implement a method that calculates the sum of the file sizes in a given subtree of the tree and test it accordingly. Use recursive DFS traversal.
  • 85. Exercises (2)
    • Implement the recursive Depth-First-Search (DFS) traversal algorithm. Test it with the sample graph from the demonstrations.
    • Implement the queue-based Breadth-First Search (BFS) traversal algorithm. Test it with the sample graph from the demonstrations.
    • Write a program for finding all cycles in given undirected graph using recursive DFS.
    • Write a program for finding all connected components of given undirected graph. Use a sequence of DFS traversals.
  • 86. Exercises (3)
    • Write a program for finding the shortest path between two vertices in a weighted directed graph. Hint: Use the Dijkstra's algorithm .
    • We are given a set of N tasks that should be executed in a sequence. Some of the tasks depend on other tasks. We are given a list of tasks { t i , t j } where t j depends on the result of t i and should be executed after it. Write a program that arranges the tasks in a sequence so that each task depending on another task is executed after it. If such arrangement is impossible indicate this fact.
    • Example: {1, 2}, {2, 5}, {2, 4}, {3, 1} → 3, 1, 2, 5, 4
    • REF:http://www.introprogramming.info/english-intro-csharp-book/read-online/chapter-17-trees-and-graphs/#presentation-slides