# Assignment 3: Bayes Nets

CS 6601: Artificial Intelligence

In this assignment, you will work with probabilistic models known as Bayesian networks to efficiently calculate the answer to probability questions concerning discrete random variables.

This assignment is due on October 10th, 11:59PM UTC-12. Please submit your solution as probability_solution.py, and DO NOT CHANGE ANY FUNCTION HEADERS FROM THE NOTEBOOK. If you have technical difficulties submitting the assignment to Canvas, post privately to Piazza immediately and attach your submission. If you have an emergency and absolutely cannot submit an assignment by the posted deadlines, we ask …

# Part 1: The power plant

# 1a: Build the network

To start, design a basic probabilistic model for the following system: there's a nuclear power plant in which an alarm is supposed to ring when the core temperature, indicated by a gauge, exceeds a fixed threshold. For simplicity, we assume that the temperature is represented as either high or normal. Use the following Boolean variables in your implementation:

- G = gauge reading (high = True, normal = False)
- T = actual temperature (high = True, normal = False)

Design a Bayesian network for this system, using pbnt to represent the nodes and the conditional probability arcs connecting them. Don't worry about the probabilities for now. WRITE YOUR CODE BELOW in the function whose docstring reads "Create a Bayes Net representation of the above power plant problem."

You will create each node with pbnt's BayesNode class (for example, a BayesNode with 2 values, an id of 0 and the name "alarm"), and you will use BayesNode.add_parent() and BayesNode.add_child() to connect nodes: for example, to connect the alarm and temperature nodes that you've already made (i.e. assuming that temperature affects the alarm probability), as in the sketch below.
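The exact commands did not survive in this copy; the following is a minimal sketch of the pbnt calls described above, assuming pbnt's BayesNode(id, numValues, name=...) constructor:

```python
from Node import BayesNode  # pbnt's node class; import path may differ per your setup

# A node with 2 values, an id of 0, and the name "alarm"
A_node = BayesNode(0, 2, name='alarm')
T_node = BayesNode(1, 2, name='temperature')

# Temperature affects the alarm probability, so temperature is the parent
T_node.add_child(A_node)
A_node.add_parent(T_node)
```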
You can run probability_tests.network_setup_test() to make sure your network is set up correctly.

# 1b: Is the network a polytree?

Is the network for the power plant system a polytree? Why or why not? Answer with one of the following strings:

- 'Yes, because it can be decomposed into multiple sub-trees.'
- 'No, because its underlying undirected graph is not a tree.'

# 1c: Set the probability distributions

Set the probability distribution for each node in the power plant system, using the following facts:

1. The temperature is hot (call this "true") 20% of the time.
2. When the temperature is hot, the gauge is faulty 80% of the time. Otherwise, the gauge is faulty 5% of the time.
3. The temperature gauge reads the correct temperature with 95% probability when it is not faulty and 20% probability when it is faulty. For simplicity, say that the gauge's "true" value corresponds with its "hot" reading and "false" with its "normal" reading, so the gauge would have a 95% chance of returning "true" when the temperature is hot and it is not faulty.
4. The alarm is faulty 15% of the time.
5. The alarm responds correctly to the gauge 55% of the time when the alarm is faulty, and it responds correctly to the gauge 90% of the time when the alarm is not faulty. For instance, when it is faulty, the alarm sounds 55% of the time that the gauge is "hot" and remains silent 55% of the time that the gauge is "normal."

Use index 0 to represent False, or you may run into testing issues: the key is to remember that 0 represents the index of the false probability, and 1 represents true.

Using pbnt's Distribution class: if you wanted to set the distribution for P(A) to 70% true, 30% false, you would invoke the commands sketched below.
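The original commands were dropped from this copy; based on pbnt's Distribution API used elsewhere in this assignment, they look like the following (A_node is assumed to be the BayesNode for A):

```python
from Distribution import DiscreteDistribution  # pbnt; import path may differ

A_distribution = DiscreteDistribution(A_node)
index = A_distribution.generate_index([], [])
A_distribution[index] = [0.3, 0.7]  # index 0 is False (30%), index 1 is True (70%)
A_node.set_dist(A_distribution)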
For a node with parents, build the table with numpy and wrap it in a ConditionalDiscreteDistribution. If you wanted to set a distribution for P(A|G,T), you would write:

```python
dist = zeros([G_node.size(), T_node.size(), A_node.size()], dtype=float32)
# ... fill in dist[g, t, :] = [P(A=False|g,t), P(A=True|g,t)] for each g, t ...
A_distribution = ConditionalDiscreteDistribution(nodes=[G_node, T_node, A_node], table=dist)
A_node.set_dist(A_distribution)
```

You can access the table by calling A_node.dist.table, which will return the same numpy array that you provided when constructing the probability distribution. You can check your probability distributions with probability_tests.probability_setup_test().

# 1d: Probability calculations: Perform inference

To finish up Part 1, you're going to perform inference on the network to calculate the following probabilities:

- the marginal probability that the alarm sounds,
- the marginal probability that the gauge shows "hot",
- the probability that the temperature is actually hot, given that the alarm sounds and the alarm and gauge are both working.

You'll fill out the "get_prob" functions to calculate the probabilities. Here's an example of how to do inference for the marginal probability of the "faulty alarm" node being True (assuming "bayes_net" is your network):

```python
F_A = bayes_net.get_node_by_name('faulty alarm')
engine = JunctionTreeEngine(bayes_net)
Q = engine.marginal(F_A)[0]                       # marginal distribution over F_A
index = Q.generate_index([True], range(Q.nDims))  # look up the True entry
prob = Q[index]
```

If you need to sanity-check to make sure you're doing inference correctly, you can run inference on one of the probabilities that we gave you in 1c: for instance, running inference on P(T=true) should return 0.19999994 (i.e. almost 20%). You can also calculate the answers by hand to double-check. Hint: check out example_inference.py under pbnt/combined, which runs queries of this kind on a small example network, e.g. ('The marginal probability of sprinkler=false:', 0.80102921) and ('The marginal probability of wetgrass=false | cloudy=False, rain=True:', 0.055). A sketch of a conditional query with evidence follows.
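For the conditional query in the third bullet, the same engine pattern works once evidence is set. The function name, the node names other than 'alarm' and 'faulty alarm', and the evidence API shown here are assumptions based on standard pbnt usage, not given in this copy:

```python
from Inference import JunctionTreeEngine  # pbnt; import path may differ

def get_temperature_prob(bayes_net):
    """P(T = hot | alarm sounds, alarm and gauge both working): a sketch."""
    A = bayes_net.get_node_by_name('alarm')           # name given in 1a
    F_A = bayes_net.get_node_by_name('faulty alarm')  # name used above
    F_G = bayes_net.get_node_by_name('faulty gauge')  # assumed name
    T = bayes_net.get_node_by_name('temperature')     # assumed name
    engine = JunctionTreeEngine(bayes_net)
    engine.evidence[A] = True     # the alarm sounds
    engine.evidence[F_A] = False  # the alarm is working
    engine.evidence[F_G] = False  # the gauge is working
    Q = engine.marginal(T)[0]
    index = Q.generate_index([True], range(Q.nDims))
    return Q[index]
```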
# Part 2: Sampling

Sampling is a method for ESTIMATING a probability distribution when it is prohibitively expensive (even for inference!) to compute it exactly. Here, rather than using inference, we will estimate distributions by sampling the network using two [Markov Chain Monte Carlo](http://www.statistics.com/papers/LESSON1_Notes_MCMC.pdf) models: Gibbs sampling (2c) and Metropolis-Hastings (2d).

For the main exercise, consider the following scenario: we want to estimate the outcome of matches between sports teams, given prior knowledge of previous matches. First, work on a similar, smaller network! For the first sub-part, consider a network with 3 teams: the Airheads, the Buffoons, and the Clods (A, B and C for short). Each team has an unknown skill level, represented as an integer from 0 to 3. Each match is played between two teams, and each team can either win, lose, or draw in a match; 3 total matches are played. The outcome of each match is probabilistically proportional to the difference in skill level between the teams.

# 2a: Build the games network

Build a Bayes Net to represent the three teams and their influences on the match outcomes, and fill out the function below to create the net. Name the nodes "A", "B", "C", "AvB", "BvC" and "CvA". Assume that each team has the prior distribution of skill levels given in the notebook, and that the differences in skill levels correspond to probabilities of winning laid out in a table of the form:

| skill difference (T2 - T1) | T1 wins | T2 wins | Tie |
|:--------------------------:|:-------:|:-------:|:---:|

Assume the following variable conventions:

| variable | meaning |
|----------|---------|
| AvB | the outcome of A vs. B (0 = A wins, 1 = B wins, 2 = tie) |
| BvC | the outcome of B vs. C (0 = B wins, 1 = C wins, 2 = tie) |
| CvA | the outcome of C vs. A (0 = C wins, 1 = A wins, 2 = tie) |

A sketch of the node construction follows.
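A minimal sketch mirroring the pbnt calls from Part 1 (the node ids are arbitrary, and C, BvC and CvA are built the same way):

```python
from Node import BayesNode  # pbnt; import path may differ

# Skill nodes take 4 values (skill levels 0-3);
# outcome nodes take 3 values (0 = first team wins, 1 = second team wins, 2 = tie).
A_node = BayesNode(0, 4, name='A')
B_node = BayesNode(1, 4, name='B')
AvB_node = BayesNode(3, 3, name='AvB')

# Both teams' skill levels influence the outcome of their match.
A_node.add_child(AvB_node)
B_node.add_child(AvB_node)
AvB_node.add_parent(A_node)
AvB_node.add_parent(B_node)
```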
""", # ('The marginal probability of sprinkler=false:', 0.80102921), #('The marginal probability of wetgrass=false | cloudy=False, rain=True:', 0.055). One way to do this is by returning the sample as a tuple. """. Artificial Intelligence Resources. For simplicity, say that the gauge's "true" value corresponds with its "hot" reading and "false" with its "normal" reading, so the gauge would have a 95% chance of returning "true" when the temperature is hot and it is not faulty. Using pbnt's Distribution class: if you wanted to set the distribution for P(A) to 70% true, 30% false, you would invoke the following commands: Use index 0 to represent FALSE or you may run into testing issues. Rather than using inference, we will do so by sampling the network using two Markov Chain Monte Carlo models: Gibbs sampling (2c) and Metropolis-Hastings (2d). of the BvC match given that A won against, B and tied C. Return a list of probabilities, corresponding to win, loss and tie likelihood. The temperature is hot (call this "true") 20% of the time. To finish up, you're going to perform inference on the network to calculate the following probabilities: You'll fill out the "get_prob" functions to calculate the probabilities. # "YOU WILL SCORE 0 POINTS ON THIS ASSIGNMENT IF YOU USE THE GIVEN INFERENCE ENGINES FOR THIS PART!! With just 3 teams (Part 2a, 2b). No products in the cart. Otherwise, the gauge is faulty 5% of the time. Now you will implement the Metropolis-Hastings algorithm, which is another method for estimating a probability distribution. You can access these by calling: which will return the same numpy array that you provided when constructing the probability distribution. If an initial value is not given, default to a state chosen uniformly at random from the possible states. The method should just consist of a single iteration of the algorithm. Assignment 3: Bayes Nets. # If you need to sanity-check to make sure you're doing inference correctly, you can run inference on one of the probabilities that we gave you in 1c. You can also calculate the answers by hand to double-check. These [slides](https://www.cs.cmu.edu/~scohen/psnlp-lecture6.pdf) provide a nice intro, and this [cheat sheet](http://www.bcs.rochester.edu/people/robbie/jacobslab/cheat_sheet/MetropolisHastingsSampling.pdf) provides an explanation of the details. """Complete a single iteration of the MH sampling algorithm given a Bayesian network and an initial state value. assignment_3 . The key is to remember that 0 represents the index of the false probability, and 1 represents true. One way to do this is by returning the sample as a tuple. First, work on a similar, smaller network! # To finish up, you're going to perform inference on the network to calculate the following probabilities: # - the marginal probability that the alarm sounds, # - the marginal probability that the gauge shows "hot", # - the probability that the temperature is actually hot, given that the alarm sounds and the alarm and gauge are both working. Contribute to dkilanga3/CS6601---Artificial-Intelligence development by creating an account on GitHub. In case of Gibbs, the returned state differs from the input state at at-most one variable (randomly chosen). By approximately what factor? Fill out the function below to create the net. 3 total matches are played. About. Use the following Boolean variables in your implementation: You will test your implementation at the end of the section. For instance, running inference on P(T=true) should return 0.19999994 (i.e. 
# 2d: Metropolis-Hastings sampling

Now you will implement the Metropolis-Hastings algorithm, which is another method for estimating a probability distribution. The general idea of MH is to build an approximation of a latent probability distribution by repeatedly generating a "candidate" value for each random variable in the system, and then probabilistically accepting or rejecting the candidate value based on an underlying acceptance function. These [slides](https://www.cs.cmu.edu/~scohen/psnlp-lecture6.pdf) provide a nice intro, and this [cheat sheet](http://www.bcs.rochester.edu/people/robbie/jacobslab/cheat_sheet/MetropolisHastingsSampling.pdf) provides an explanation of the details.

Complete a single iteration of the MH sampling algorithm given a Bayesian network and an initial state value in MH_sampler(). As with Gibbs, the method should just perform a single iteration of the algorithm and return a hashable sample. The acceptance function compares the joint probability ("weight") of the current state against that of the candidate state. The notebook includes the following fragment for computing these weights for an arbitrary state of the system (all teams share A's prior table):

```python
current_weight = A.dist.table[initial_value[0]] * A.dist.table[initial_value[1]] * A.dist.table[initial_value[2]] \
                 * AvB.dist.table[initial_value[0]][initial_value[1]][initial_value[3]] \
                 * AvB.dist.table[initial_value[1]][initial_value[2]][initial_value[4]] \
                 * AvB.dist.table[initial_value[2]][initial_value[0]][initial_value[5]]
new_weight = A.dist.table[new_state[0]] * A.dist.table[new_state[1]] * A.dist.table[new_state[2]] \
             * AvB.dist.table[new_state[0]][new_state[1]][new_state[3]] \
             * AvB.dist.table[new_state[1]][new_state[2]][new_state[4]] \
             * AvB.dist.table[new_state[2]][new_state[0]][new_state[5]]
```

A sketch of the full acceptance step follows.
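Putting the weights together, a single MH iteration might look like the following sketch (mh_iteration is a hypothetical name; with evidence, the fixed variables would be excluded from the proposal, and a zero-probability current state would need special handling):

```python
import random
import numpy as np

def mh_iteration(bayes_net, state):
    A = bayes_net.get_node_by_name('A')
    AvB = bayes_net.get_node_by_name('AvB')
    team_table, match_table = A.dist.table, AvB.dist.table

    def weight(s):
        # Joint probability of a full state (A, B, C, AvB, BvC, CvA).
        return (team_table[s[0]] * team_table[s[1]] * team_table[s[2]]
                * match_table[s[0], s[1], s[3]]
                * match_table[s[1], s[2], s[4]]
                * match_table[s[2], s[0], s[5]])

    # Propose a candidate state uniformly at random.
    candidate = [np.random.randint(4) for _ in range(3)] + \
                [np.random.randint(3) for _ in range(3)]
    # Accept with probability min(1, p(candidate) / p(current)).
    if random.random() < weight(candidate) / weight(state):
        return tuple(candidate)
    return tuple(state)
```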
# 2e: Compare the two sampling methods

Now we are ready for the moment of truth. Given the same outcomes as in 2b (A beats B and A draws with C), estimate the likelihood of the different outcomes for the third match by running Gibbs sampling until it converges to a stationary distribution, and do the same for Metropolis-Hastings: calculate the number of iterations each sampler needs to converge to any stationary distribution. For the purpose of this assignment, we'll say that the sampler has converged when, for 10 successive iterations, the difference in expected outcome for the third match differs from the previous estimated outcome by less than 0.001 (0.1%). For the most stationary convergence, delta should be very small. Note: just measure how many iterations it takes for the sampler to converge to a stable distribution over the posterior, regardless of how close to the actual posterior your approximations are.

The notebook's skeleton suggests the following structure:

```python
# TODO: assign value to choice and factor. Check Hints 1 and 2 below, for more details.
# Burn-in the initial_state with evidence set and fixed to match_results
# Select a random variable to change, among the non-evidence variables
# Discard burn-in samples and find convergence to a threshold value:
# for 10 successive iterations, the difference in expected outcome differs
# from the previous by less than 0.001 (check for convergence in
# consecutive sample probabilities)
```

Then fill in sampling_question() to answer both parts of the question: which algorithm converges more quickly, and by approximately what factor? For instance, if Metropolis-Hastings takes twice as many iterations to converge as Gibbs sampling, you'd say that Gibbs converged faster by a factor of 2. A hypothetical convergence check is sketched below.
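One way to encode this stopping rule is a small helper like the following (hypothetical, not part of the notebook; `history` holds the estimated win/loss/tie distribution for the match after each iteration):

```python
import numpy as np

def converged(history, delta=0.001, window=10):
    """True once the estimated outcome distribution has changed by less
    than delta for `window` successive iterations."""
    if len(history) < window + 1:
        return False
    for prev, curr in zip(history[-(window + 1):-1], history[-window:]):
        if np.max(np.abs(np.array(curr) - np.array(prev))) >= delta:
            return False
    return True
```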
With just 3 teams (Part 2a, 2b), exact inference was cheap enough to check your sampling estimates directly. Now suppose you have 5 teams. A match is played between teams Ti and Ti+1 to give a total of 5 matches, i.e. T1vsT2, T2vsT3, ..., T4vsT5, T5vsT1. We want to ESTIMATE the outcome of the last match (T5vsT1), given prior knowledge of the other 4 matches: estimate the likelihood of the different outcomes of the fifth match by running Gibbs sampling until it converges to a stationary distribution. You don't necessarily need to create a new network; you can just use the probability distribution tables from the previous part.

The samplers now operate on a state initial_value of length 10, where:

- index 0-4 represent the skills of teams T1, ..., T5 (values lie in [0,3] inclusive),
- index 5-9 represent the results of matches T1vT2, ..., T5vT1 (values lie in [0,2] inclusive),

and each returns the new state sampled from the probability distribution as a tuple of length 10, starting from an arbitrary initial state for the game system if none is given. Hint: to use the AvB.dist.table (needed for joint probability calculations), you could do something like p = match_table[initial_value[x-n], initial_value[(x+1-n)%n], initial_value[x]], where n = 5 and x = 5, 6, ..., 9. The notebook's commented-out skeleton gives the update rules:

```python
# Update a skill variable x based on conditional joint probabilities:
# skill_prob[i] = team_table[i] * match_table[i, initial_value[(x+1)%n], initial_value[x+n]] \
#                 * match_table[initial_value[(x-1)%n], i, initial_value[(2*n-1) if x==0 else (x+n-1)]]
# skill_prob = skill_prob / normalize
# initial_value[x] = np.random.choice(4, p=skill_prob)

# Update a game result variable x based on parent skills and match probabilities:
# result_prob = match_table[initial_value[x-n], initial_value[(x+1-n)%n], :]
# initial_value[x] = np.random.choice(3, p=result_prob)
```

A sketch of a driver loop that estimates the T5vT1 posterior from these samples appears below.
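A hypothetical driver that uses Gibbs_sampler() this way; the iteration counts and the 0/1/2 outcome convention for T5vT1 (by analogy with AvB) are assumptions:

```python
from collections import Counter

state = None  # None: the sampler picks a uniformly random initial state
counts = Counter()
for i in range(100000):
    state = Gibbs_sampler(bayes_net, state)  # one Gibbs iteration per call
    if i >= 1000:                            # discard burn-in samples
        counts[state[9]] += 1                # index 9 holds the T5vT1 result
total = sum(counts.values())
posterior = [counts[k] / total for k in range(3)]  # [T5 wins, T1 wins, tie]
```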
# Part 3: Wrapping up

Fill in complexity_question() to answer the complexity question from the notebook, using big-O notation. For example, write 'O(n^2)' for second-degree polynomial runtime.

Finally, a simple task to wind down the assignment: return your name from the function aptly called return_your_name().

You're done!