It goes from the left child, then the current node, then the right child.

So for the given tree above, it will be: `[2,17,7,19,3,100,25,36,1]`

```java
public void inOrder(Node node) {
    if (node == null) return;
    inOrder(node.left);   // recurse on the left subtree first
    visit(node);          // do something at the current node
    inOrder(node.right);  // then recurse on the right subtree
}
```

It deals with the current node first, then goes to the left child and then the right child. So the pre-order traversal is: `[100,19,17,2,7,3,36,25,1]`

```java
public void preOrder(Node node) {
    if (node == null) return;
    visit(node);           // do something at the current node first
    preOrder(node.left);   // then recurse on the left subtree
    preOrder(node.right);  // then recurse on the right subtree
}
```

It recursively visits the left child, then the right child, and only then the current node. So for this tree: `[2,7,17,3,19,25,1,36,100]`

```java
public void postOrder(Node node) {
    if (node == null) return;
    postOrder(node.left);   // recurse on the left subtree first
    postOrder(node.right);  // then recurse on the right subtree
    visit(node);            // finally do something at the current node
}
```

Level-order traversal is basically a Breadth-First Search (BFS) of the tree. For BFS, a queue data structure is useful. See the queue implementation here.

```java
public ArrayList<Node> traverse() {
    ArrayList<Node> closed = new ArrayList<>();
    Queue<Node> open = new LinkedList<Node>();
    open.add(this.startNode);
    while (!open.isEmpty()) {
        Node currentNode = open.poll();
        if (currentNode != null) {
            for (Node child : currentNode.getChildren()) {
                if (child != null) open.add(child);
            }
            closed.add(currentNode);
        }
    }
    return closed;
}
```

This is marked as a medium-difficulty problem. However, if you know what in-order traversal does, it is a very simple problem. Here I just added a helper `traverse()` method.

All the trick is done in a few lines inside the function. If the current node is null, it returns immediately. Otherwise, it runs the same method recursively on the left child first, then adds the current node's value to the list called `result`, and finally proceeds to the right child just like the left one.

```java
class Solution {
    public List<Integer> inorderTraversal(TreeNode root) {
        return traverse(root, new ArrayList<Integer>());
    }

    private List<Integer> traverse(TreeNode node, List<Integer> result) {
        if (node == null) return result;
        traverse(node.left, result);   // left subtree first
        result.add(node.val);          // then the current node
        traverse(node.right, result);  // then the right subtree
        return result;
    }
}
```

This solution beats 100% of the previous solutions by runtime and 99.62% by memory. Woohoo!

- addToEnd or enqueue
- removeFromFirst or dequeue
- peek
- isEmpty

You can find the implementation on GitHub here.

```java
package practiceJava;

import java.util.NoSuchElementException;

public class QueueImplementation<T> {

    private static class QueueNode<T> {
        private T data;
        private QueueNode<T> next;

        public QueueNode(T data) {
            this.data = data;
        }
    }

    private QueueNode<T> first;
    private QueueNode<T> last;

    public void addToQueue(T data) {
        QueueNode<T> node = new QueueNode<T>(data);
        // add to the end of the queue
        if (this.last != null) {
            this.last.next = node;
        }
        last = node; // update the last pointer
        // if the queue was empty, this node is also the first one
        if (first == null) {
            first = last;
        }
    }

    public T removeFromFirst() {
        if (first == null) throw new NoSuchElementException();
        T firstData = this.first.data; // remove from the front for FIFO
        first = first.next;            // advance the first pointer
        // if the queue is now empty, also clear the last pointer
        if (first == null) {
            this.last = null;
        }
        return firstData;
    }

    public T peek() {
        if (this.first == null) throw new NoSuchElementException();
        return first.data;
    }

    public boolean isEmpty() {
        return first == null;
    }
}
```

I propose these steps below.

1. **Data Exploration:** I will explore the data features and target variables and get an understanding of how they are related.
2. **Exploratory Data Analysis:** I will draw correlation plots and find ranges and outliers in the data.
3. **Feature Selection/Engineering:** I will identify and select/generate features for the models.
4. **Cross-Validation, Hyperparameter Tuning, and Building Models:** I will run cross-validation to search for the best hyperparameters for each model.
5. **Model Selection, Performance Evaluation, and Delivery:** I will chart each model's performance graphically and deliver the code and report.
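Step 4 can be sketched with a plain k-fold loop. This is a minimal illustration rather than project code: the closed-form ridge-regression model, the alpha grid, and the synthetic data below are all assumptions made for the example.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle n sample indices and split them into k roughly equal folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cv_score(X, y, alpha, k=5):
    """Mean validation MSE of a closed-form ridge fit across k folds."""
    folds = kfold_indices(len(X), k)
    errors = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        Xtr, ytr = X[train], y[train]
        # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
        w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
        errors.append(np.mean((X[val] @ w - y[val]) ** 2))
    return float(np.mean(errors))

# Synthetic data standing in for the real dataset (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Grid search: keep the hyperparameter with the lowest cross-validated error.
best_alpha = min([0.01, 0.1, 1.0, 10.0], key=lambda a: cv_score(X, y, a))
```

In a real project the same loop would wrap whatever model family is being tuned; the point is that every candidate hyperparameter is scored on held-out folds only.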

I would also recommend you split the Machine Learning project deliverables into 3 milestones.

- **First Milestone:** Steps 1, 2, and 3 above
- **Second Milestone:** Step 4
- **Third Milestone:** Step 5

This is the first step. It basically means asking the right questions to understand the problem, what is being asked, and what the goals are. Oftentimes one may bring their own assumptions and biases and go completely off on a tangent from the initial problem.

You need to identify who is going to benefit from the product or service. It is critical to know how many people would be using it, and what are the demographics of the potential users.

This may be closely aligned with the user group identified above. It is not necessarily a sequential step from the above. Finding the pain points can also reveal more about the customers or vice versa.

Now that you have the vague problems, which one requires an immediate solution? Which problems or pain points are feasible to attack? Is the problem worthy of the investment? How many people will be required? How much capital and how much technical effort is required for each one?

Once you have a prioritized list of problems, can you identify which problem you need to solve first?

Now let us talk about Design Thinking. Can you suggest probable solutions to the problem you defined? Does another solution solve it in a different way? What are the drawbacks and compromises of each solution? Would they scale?

It is recommended that you have a range of solutions and think critically through their strengths and weaknesses, so you have a fair chance of not falling into your own biases.

Remember, you need to come up with hypothetical solutions and let the market provide the validation for the hypothesis. You should not fall in love with your own imaginary solution. Maybe it is not as great in practice as you imagined in the first place. So be aware, and list a range of solutions!

Now that you have gone through the thought process, can you rank the solutions and identify which one to implement? What are the assumptions again? What is the target audience or customer group? What are their pain points?

Just recap your thoughts so you have a mental map of the whole area you just explored in your mind.

Finally, this is not just a useful framework for Product Management, but also a great critical thinking framework in general.

- Listens effectively and coaches others
- Encourages others to take measured risk
- Takes the blame for subordinates when they take risks, but attributes the credit to them
- Explains the why, gives a clear expectation of the result along with the timeline, and provides resources to achieve the result
- Shows empathy
- Communicates well and often

- Unfair to others when expectation is not met
- Makes decisions without much clarity
- Does not communicate well

Managing people is significantly different from working as an individual contributor.

**Achieve through others instead of doing it all by yourself:** You need to build your team's capability to achieve something together, so there is little opportunity to achieve something by your individual effort alone. This still takes a lot of energy and work. Effective management can achieve synergy through collaboration.

**Set up objectives and key results:** OKRs can be an effective way to set your team up for success. You can set qualitative as well as quantitative measures of success. The OKRs need to be consistent with intra-team, inter-team, and personal goals and objectives.

**Know your people:** This is extremely important, since only by knowing people can you help them make their individual lives better and help them feel accomplished. Your people are humans too, with their own set of unique aspirations and issues. If you help them overcome their challenges, you gain a friend for life, and a loyal employee can be a true asset. So set up regular 1-on-1 meetings, learn their problems and aspirations, let them speak, coach them to help them identify their own solutions, and identify how to help them reach their goals.

**Communicate using a well-chosen vocabulary:** Give frequent positive feedback, tell them that you noticed their good work, give them a little more challenge, and help them progress.

**Ask for feedback and learn from mistakes:** Seek feedback from your peers, levels above, and direct reports. The earlier you get genuine feedback and correct your course, the better your chances of success.

**Stay resilient:** Your actions, behavior, and verbal and non-verbal gestures convey to others whether you are confident or anxious about a situation. Do not get too optimistic or pessimistic about a situation unless you have a strong vision.

**Spend your energy as if it is a precious resource:** Prioritize what matters most: health, exercise, eating, and sleep. Work may creep up to occupy your weekends, and working long hours is not sustainable. Try the Eisenhower Matrix to help prioritize for your long-term success.

You need to understand the skills, weaknesses, leadership style, and goals of your manager.

Below are notes copied from Franklin Covey resources:

1. Make a list of colleagues whose work links to yours, and focus on building or strengthening key relationships.
2. Spend 15 minutes brainstorming and writing down a list of colleagues whose work could impact yours.
3. Consider which relationships to build.
4. Be proactive in sharing mutually beneficial information.
5. Actively seek feedback about your ideas and projects, but only if you genuinely want the input.

MDP is a mathematical framework to model discrete-time stochastic systems. An MDP consists of states (S); actions (A) to be taken at each state; and transitions (T) from a state s to another state s' by taking an action a with probability T(s,a,s'). At each state, the agent interacting with the system collects a reward R(s), R(s,a), or R(s,a,s'), depending on the stochasticity of the system.

An MDP is solved when the best policy, π(s), is identified for every state. A policy is the action the agent decides to take at each state so that the expected total reward in the first equation below is maximized, as shown in the second equation below.
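The two equations referenced here did not survive formatting; as a reconstruction, the standard statements (with γ denoting the discount factor) are:

```latex
% Eq. (1): expected discounted total reward when following policy \pi from s
V^{\pi}(s) \;=\; \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{\,t}\, R(s_t) \;\middle|\; s_0 = s,\ \pi \right]

% Eq. (2): the optimal policy picks the action maximizing that return
\pi^{*}(s) \;=\; \arg\max_{a} \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma\, V^{*}(s') \right]
```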

The Policy Iteration (PI) algorithm needs the T and R matrices. It starts with an arbitrary policy, updates the value function as in Eq. (1), and then finds the best policy from Eq. (2), repeating until the new and previous policies are the same. This is a straightforward iterative process.
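The evaluate-then-improve loop can be sketched as follows; the (S, A, S') tensor layout and the tiny two-state MDP at the bottom are illustrative assumptions, not from the original text.

```python
import numpy as np

def policy_iteration(T, R, gamma=0.9):
    """T: (S, A, S') transition probabilities; R: (S,) state rewards."""
    n_states = T.shape[0]
    policy = np.zeros(n_states, dtype=int)  # start with an arbitrary policy
    while True:
        # Policy evaluation: solve the linear system V = R + gamma * T_pi V
        T_pi = T[np.arange(n_states), policy]            # (S, S') under policy
        V = np.linalg.solve(np.eye(n_states) - gamma * T_pi, R)
        # Policy improvement: pick the greedy action in every state
        new_policy = (R[:, None] + gamma * (T @ V)).argmax(axis=1)
        if np.array_equal(new_policy, policy):           # policies match: done
            return policy, V
        policy = new_policy

# Tiny 2-state example: action 0 stays put, action 1 moves to the other state.
T = np.zeros((2, 2, 2))
T[0, 0, 0] = T[0, 1, 1] = T[1, 0, 1] = T[1, 1, 0] = 1.0
R = np.array([0.0, 1.0])  # only state 1 pays a reward
policy, V = policy_iteration(T, R)  # optimal: move to state 1, then stay
```

The exact linear solve in the evaluation step is what makes PI converge in few iterations, at the cost of solving a system per iteration.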

Value Iteration algorithm

VI also needs the T and R matrices, but it can bypass the need to find a policy in each iteration. It initializes a random V(s) matrix with a random Bellman value for each state. Then, for each state, it finds the best action and updates V(s) until all the V(s) values are stable. This way it can quickly find the best policies that stabilize the V matrix, instead of having to explicitly find a policy and then compute V(s) as in PI. Each iteration may be faster than in PI, but it may often take more iterations.
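The same (S, A, S') tensor convention gives a compact VI sketch; the two-state MDP and zero initialization (any initial V works) are assumptions for the example.

```python
import numpy as np

def value_iteration(T, R, gamma=0.9, tol=1e-10):
    """Bypass explicit policies: iterate the Bellman optimality update."""
    V = np.zeros(T.shape[0])  # any initial guess works; zeros for simplicity
    while True:
        Q = R[:, None] + gamma * (T @ V)  # Q(s,a) for every state-action pair
        V_new = Q.max(axis=1)             # best achievable value per state
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1), V_new  # greedy policy once V is stable
        V = V_new

# Tiny 2-state example: action 0 stays put, action 1 moves to the other state.
T = np.zeros((2, 2, 2))
T[0, 0, 0] = T[0, 1, 1] = T[1, 0, 1] = T[1, 1, 0] = 1.0
R = np.array([0.0, 1.0])  # only state 1 pays a reward
policy, V = value_iteration(T, R)
```

Note that each sweep is just a max over precomputed Q values, which is cheaper per iteration than PI's linear solve, but the contraction by γ per sweep means more sweeps are needed for a tight tolerance.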

Unlike PI and VI, QL is realistic in that the actual T and R matrices are often not known in a real-world problem. Instead, an agent can only interact with the environment and try to update its impression of the system from that experience.

In QL, at each state the agent takes either a random action or the best action, the one that maximizes V(s). This is known as EXPLORATION vs. EXPLOITATION, controlled by a parameter ε that determines whether the agent randomly explores a new action or takes the best action from what it has seen. The agent takes a random action with probability ε and the greedy action with probability (1-ε). If it does not explore in the beginning, it keeps taking the same decision based on its first action and gets stuck. So initially, more exploration is useful to build a rich experience of the environment. As the agent explores and takes random actions over and over, it gradually becomes more confident about its experience and may not need to explore as much. So the parameter ε can be reduced gradually.

The learning rate parameter α, with 0 < α ≤ 1, controls how strongly the new sample updates the previous Q(s,a) value. A lower value makes convergence slow, and too high an α value makes V(s) fluctuate a lot.
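The ε-greedy rule and the α-weighted update can be sketched as below; the table sizes, parameter values, and the single observed transition are illustrative assumptions.

```python
import random

def epsilon_greedy(Q, s, epsilon):
    """Explore a random action with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.randrange(len(Q[s]))          # EXPLORATION
    return max(range(len(Q[s])), key=lambda a: Q[s][a])  # EXPLOITATION

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Blend the old estimate with the new sample, weighted by alpha."""
    sample = r + gamma * max(Q[s_next])             # observed one-step return
    Q[s][a] = (1 - alpha) * Q[s][a] + alpha * sample

# Two states, two actions, all estimates starting at zero.
Q = [[0.0, 0.0], [0.0, 0.0]]
q_update(Q, 0, 1, 1.0, 1)  # observed reward 1.0 moving from state 0 to 1
```

Decaying ε over episodes, as the text suggests, just means passing a smaller `epsilon` to `epsilon_greedy` as experience accumulates.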

**Bellman Equation:** https://en.m.wikipedia.org/wiki/Markov_decision_process

**Policy Iteration and Value Iteration:** Mihaela van der Schaar, Reinforcement Learning: A brief introduction

**Q-Learning:** Sargur N. Srihari, University of Buffalo

After creating a new database in Hive, you only need a Hive Ranger policy to allow reading the tables in the new database from Hive/Beeline/Beeline-Ranger.

But when reading a Hive table from Spark, you also need an HDFS permission policy in addition to the Hive Ranger policy above.

For internal/managed ACID tables, use

```
hive.executeQuery("SELECT * FROM DB.TABLE")
```

For external non-ACID tables, use the following instead of `hive.executeQuery()` to get a 10x performance increase:

```
spark.sql("SELECT * FROM DB.TABLE")
```

Now, if you are wondering whether a table is managed or external, you can run the following in Hive/Beeline/Beeline-Ranger, which tells you the table type:

```
DESCRIBE FORMATTED db.table_name;
```

It should show the information about the table. Check the Table Type value; it should say either `MANAGED_TABLE` or `EXTERNAL_TABLE`.

So you would need to calculate the loss of data due to this reduction in dimensionality.

```python
import numpy as np
from sklearn.random_projection import SparseRandomProjection
from sklearn.metrics import mean_squared_error

# data has this shape: row, col = 4898, 11
random_projection = SparseRandomProjection(n_components=5)
random_projection.fit(data)
components = random_projection.components_.toarray()  # shape = (5, 11)
p_inverse = np.linalg.pinv(components.T)              # shape = (5, 11)

# now get the transformed data using the projection components
reduced_data = random_projection.transform(data)      # shape = (4898, 5)
reconstructed = reduced_data.dot(p_inverse)           # shape = (4898, 11)
assert data.shape == reconstructed.shape
error = mean_squared_error(data, reconstructed)
```