dfs(pos + 1, rest - i × freq[pos][0])
That is, we choose this number i times. Here i cannot exceed the number of occurrences of this number, and i × freq[pos][0] cannot exceed rest. Meanwhile, we need to put i copies of freq[pos][0] into the list.
class Solution {
private:
    vector<pair<int, int>> freq;  // (value, occurrence count), built from the sorted candidates
    vector<vector<int>> ans;
    vector<int> sequence;

public:
    void dfs(int pos, int rest) {
        if (rest == 0) {
            ans.push_back(sequence);
            return;
        }
        if (pos == freq.size() || rest < freq[pos].first) {
            return;
        }

        dfs(pos + 1, rest);  // skip the current number entirely

        int most = min(rest / freq[pos].first, freq[pos].second);
        for (int i = 1; i <= most; ++i) {
            sequence.push_back(freq[pos].first);
            dfs(pos + 1, rest - i * freq[pos].first);
        }
        for (int i = 1; i <= most; ++i) {
            sequence.pop_back();
        }
    }

    vector<vector<int>> combinationSum2(vector<int>& candidates, int target) {
        sort(candidates.begin(), candidates.end());
        for (int num : candidates) {
            if (freq.empty() || num != freq.back().first) {
                freq.emplace_back(num, 1);
            } else {
                ++freq.back().second;
            }
        }
        dfs(0, target);
        return ans;
    }
};
result has {""} candiate is "abc" generate three strings "" + "a", ""+"b", ""+"c" and put into tmp, tmp = {"a", "b","c"} swap result and tmp (swap does not take memory copy) Now result has {"a", "b", "c"} Stage 2 for number "2":
result has {"a", "b", "c"} candidate is "def" generate nine strings and put into tmp, "a" + "d", "a"+"e", "a"+"f", "b" + "d", "b"+"e", "b"+"f", "c" + "d", "c"+"e", "c"+"f" so tmp has {"ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf" } swap result and tmp Now result has {"ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf" } Stage 3 for number "3":
result has {"ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf" } candidate is "ghi" generate 27 strings and put into tmp, add "g" for each of "ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf" add "h" for each of "ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf" add "h" for each of "ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf" so, tmp has {"adg", "aeg", "afg", "bdg", "beg", "bfg", "cdg", "ceg", "cfg" "adh", "aeh", "afh", "bdh", "beh", "bfh", "cdh", "ceh", "cfh" "adi", "aei", "afi", "bdi", "bei", "bfi", "cdi", "cei", "cfi" } swap result and tmp Now result has {"adg", "aeg", "afg", "bdg", "beg", "bfg", "cdg", "ceg", "cfg" "adh", "aeh", "afh", "bdh", "beh", "bfh", "cdh", "ceh", "cfh" "adi", "aei", "afi", "bdi", "bei", "bfi", "cdi", "cei", "cfi" } Finally, return result.
int findPeakElement(vector<int>& nums) {
    int left = 0, right = nums.size() - 1;
    while (left < right) {
        int mid = left + (right - left) / 2;
        if (nums[mid] < nums[mid + 1]) {
            left = mid + 1;   // a peak must lie to the right of mid
        } else {
            right = mid;      // mid itself may be the peak (note: right = mid, not mid + 1)
        }
    }
    return left;
}
Time Complexity: O(log n), due to the binary search halving the range on every iteration of the while loop. Space Complexity: O(1), as only a handful of variables are initialized at the beginning, which is constant irrespective of the given input.
long long s = 0, e = x, ans = 0, mid;  // long long, because some test cases overflow the integer limit
while (s <= e) {
    mid = (s + e) / 2;
    if (mid * mid == x) return mid;    // if the 'mid' value ever gives the exact result, simply return it
    else if (mid * mid < x) {
        s = mid + 1;  // 'mid' squared is too low, so discard all values lower than mid
        ans = mid;    // the extra variable 'ans' tracks the largest 'mid' whose square is below x
    }
    else e = mid - 1; // 'mid' squared is too high, so discard all values greater than mid
}
return ans;
Overview Finding the Duplicate Number is a classic problem, and as such there are many different ways to approach it; a total of 7 approaches are presented here. The first 4 approaches involve rearranging or modifying elements of the array, and hence do not meet the constraints specified in the problem statement. However, they are included here since they are more feasible to come up with as the first approach in an interview setting. Since each approach is independent of the other approaches, they can be read in any order.
Proof

Proving that at least one duplicate must exist in nums is an application of the pigeonhole principle. Here, each number in nums is a "pigeon" and each distinct number that can appear in nums is a "pigeonhole." Because there are n + 1 numbers and n distinct possible numbers, the pigeonhole principle implies that if you were to put each of the n + 1 pigeons into n pigeonholes, at least one of the pigeonholes would have 2 or more pigeons.
Approach 1: Sort Note: This approach modifies individual elements and does not use constant space, and hence does not meet the problem constraints. However, it utilizes a fundamental concept that can help solve similar problems.
Intuition
In an unsorted array, duplicate elements may be scattered across the array. However, in a sorted array, duplicate numbers will be next to each other.
Algorithm
Sort the input array (nums).
Iterate through the array, comparing the current number to the previous number (i.e. compare nums[i] to nums[i - 1] where i > 0).
Return the first number that is equal to its predecessor.

Complexity Analysis
Time Complexity: O(n log n)
Sorting takes O(n log n) time. This is followed by a linear scan, resulting in a total of O(n log n) + O(n) = O(n log n) time.
Space Complexity: O(log n) or O(n)
The space complexity of the sorting algorithm depends on the implementation of each programming language:
In Java, Arrays.sort() for primitives is implemented using a variant of the Quick Sort algorithm, which has a space complexity of O(log n). In C++, the sort() function provided by the STL uses a hybrid of Quick Sort, Heap Sort and Insertion Sort, with a worst-case space complexity of O(log n). In Python, the sort() function is implemented using the Timsort algorithm, which has a worst-case space complexity of O(n).

Approach 2: Set

Note: This approach does not use constant space, and hence does not meet the problem constraints. However, it utilizes a fundamental concept that can help solve similar problems.
Intuition
As we traverse the array, we need a way to "remember" values that we've seen. If we come across a number that we've seen before, we've found the duplicate. An efficient way to record the seen values is by adding each number to a set as we iterate over the nums array.
Algorithm
In order to achieve linear time complexity, we need to be able to insert elements into a data structure and look them up in constant time. A HashSet/unordered_set is well suited for this purpose. Initialize an empty hashset, seen.
Iterate over the array and first check if the current element exists in the hashset (seen).
If it does exist in the hashset, that number is the duplicate and can be returned right away. Otherwise, insert the current element into seen, move to the next element in the array and repeat step 2.
Complexity Analysis
Time Complexity: O(n)
HashSet insertions and lookups have amortized constant time complexities. Hence, this algorithm requires linear time, since it consists of a single for loop that iterates over each element, looking up the element and inserting it into the set at most once.
Space Complexity: O(n)
We use a set that may need to store at most n elements, leading to a linear space complexity of O(n).
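A short C++ sketch of this set-based approach (the article describes it only in prose; the code below is my rendering):

#include <unordered_set>
#include <vector>
using namespace std;

int findDuplicate(vector<int>& nums) {
    unordered_set<int> seen;
    for (int num : nums) {
        if (seen.count(num)) return num;  // seen before: this is the duplicate
        seen.insert(num);
    }
    return -1;  // unreachable given the problem's guarantee of a duplicate
}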
class Solution {
public:
    int findDuplicate(vector<int>& nums) {
        int n = nums.size();
        int l = 1, r = n - 1, ans = -1;
        while (l <= r) {
            int mid = (l + r) >> 1;
            int cnt = 0;
            for (int i = 0; i < n; ++i) {
                cnt += nums[i] <= mid;
            }
            if (cnt <= mid) {
                l = mid + 1;
            } else {
                r = mid - 1;
                ans = mid;
            }
        }
        return ans;
    }
};
class Solution {
public:
    int findDuplicate(vector<int>& nums) {
        int n = nums.size(), ans = 0;
        // determine the highest set bit of n - 1 in binary
        int bit_max = 31;
        while (!((n - 1) >> bit_max)) {
            bit_max -= 1;
        }
        for (int bit = 0; bit <= bit_max; ++bit) {
            int x = 0, y = 0;
            for (int i = 0; i < n; ++i) {
                if (nums[i] & (1 << bit)) {
                    x += 1;
                }
                if (i >= 1 && (i & (1 << bit))) {
                    y += 1;
                }
            }
            if (x > y) {
                ans |= 1 << bit;
            }
        }
        return ans;
    }
};
Solution 3: Fast & Slow Pointers
class Solution {
public:
    int findDuplicate(vector<int>& nums) {
        int slow = 0, fast = 0;
        do {
            slow = nums[slow];
            fast = nums[nums[fast]];
        } while (slow != fast);
        slow = 0;
        while (slow != fast) {
            slow = nums[slow];
            fast = nums[fast];
        }
        return slow;
    }
};
Since the robot can only move right and down, when it arrives at a point, it either arrives from left or above. If we use dp[i][j] for the number of unique paths to arrive at the point (i, j), then the state equation is dp[i][j] = dp[i][j - 1] + dp[i - 1][j]. Moreover, we have the base cases dp[0][j] = dp[i][0] = 1 for all valid i and j.
class Solution {
public:
    int uniquePaths(int m, int n) {
        vector<vector<int>> dp(m, vector<int>(n, 1));
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                dp[i][j] = dp[i - 1][j] + dp[i][j - 1];
            }
        }
        return dp[m - 1][n - 1];
    }
};

The above solution runs in O(m * n) time and costs O(m * n) space. However, you may have noticed that each time we update dp[i][j], we only need dp[i - 1][j] (at the previous row) and dp[i][j - 1] (at the current row). So we can reduce the memory usage to just two rows (O(n)).
class Solution {
public:
    int uniquePaths(int m, int n) {
        vector<int> pre(n, 1), cur(n, 1);
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                cur[j] = pre[j] + cur[j - 1];
            }
            swap(pre, cur);
        }
        return pre[n - 1];
    }
};

Further inspecting the above code, pre[j] is just cur[j] before the update. So we can further reduce the memory usage to one row.
class Solution {
public:
    int uniquePaths(int m, int n) {
        vector<int> cur(n, 1);
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                cur[j] += cur[j - 1];
            }
        }
        return cur[n - 1];
    }
};

Now, you may wonder whether we can further reduce the memory usage to just O(1) space, since the above code seems to use only two variables (cur[j] and cur[j - 1]). However, since the whole row cur needs to be updated m - 1 times (the outer loop) based on its old values, all of its values need to be saved, and thus O(1) space is impossible. If a DP problem has no outer loop and just the inner one, then O(1) space is possible.
Solution 1: Combinatorics
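This heading refers to the closed-form count: every path consists of m - 1 moves down and n - 1 moves right, so the answer is C(m + n - 2, m - 1). A minimal C++ sketch of this approach (my own, since the original section gives no code for it):

class Solution {
public:
    int uniquePaths(int m, int n) {
        long long ans = 1;
        // build C(m + n - 2, m - 1) incrementally; the division is exact at every step,
        // because ans is always a binomial coefficient after each iteration
        for (int x = n, y = 1; y < m; ++x, ++y) {
            ans = ans * x / y;
        }
        return (int)ans;
    }
};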
Solution 2: Dynamic Programming
Let dp[i][j] be the number of paths to reach cell (i, j).

Recurrence: dp[i][j] = dp[i-1][j] + dp[i][j-1]

Note that the first row dp[0][j] and the first column dp[i][0] are on the boundary, so their values can only be 1.
Time complexity: O(m*n)

Space complexity: O(m*n)
Optimization: since each step only needs dp[i-1][j] and dp[i][j-1], we only have to keep those two values; see the code below.
Approach 2:
class Solution {
    public int uniquePaths(int m, int n) {
        int[][] dp = new int[m][n];
        for (int i = 0; i < n; i++) dp[0][i] = 1;
        for (int i = 0; i < m; i++) dp[i][0] = 1;
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                dp[i][j] = dp[i - 1][j] + dp[i][j - 1];
            }
        }
        return dp[m - 1][n - 1];
    }
}
Optimization 1: space complexity O(2n)
class Solution {
    public int uniquePaths(int m, int n) {
        int[] pre = new int[n];
        int[] cur = new int[n];
        Arrays.fill(pre, 1);
        Arrays.fill(cur, 1);
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                cur[j] = cur[j - 1] + pre[j];
            }
            pre = cur.clone();
        }
        return pre[n - 1];
    }
}
Optimization 2: space complexity O(n)
class Solution {
    public int uniquePaths(int m, int n) {
        int[] cur = new int[n];
        Arrays.fill(cur, 1);
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                cur[j] += cur[j - 1];
            }
        }
        return cur[n - 1];
    }
}
This is a typical DP problem. Suppose the minimum path sum of arriving at point (i, j) is S[i][j]; then the state equation is S[i][j] = min(S[i - 1][j], S[i][j - 1]) + grid[i][j].
Well, some boundary conditions need to be handled. The boundary conditions happen on the topmost row (S[i - 1][j] does not exist) and the leftmost column (S[i][j - 1] does not exist). Suppose grid is like [1, 1, 1, 1], then the minimum sum to arrive at each point is simply an accumulation of previous points and the result is [1, 2, 3, 4].
Now we can write down the following (unoptimized) code.

class Solution {
public:
    int minPathSum(vector<vector<int>>& grid) {
        int m = grid.size();
        int n = grid[0].size();
        vector<vector<int>> sum(m, vector<int>(n, grid[0][0]));
        for (int i = 1; i < m; i++)
            sum[i][0] = sum[i - 1][0] + grid[i][0];
        for (int j = 1; j < n; j++)
            sum[0][j] = sum[0][j - 1] + grid[0][j];
        for (int i = 1; i < m; i++)
            for (int j = 1; j < n; j++)
                sum[i][j] = min(sum[i - 1][j], sum[i][j - 1]) + grid[i][j];
        return sum[m - 1][n - 1];
    }
};

As can be seen, each time when we update sum[i][j], we only need sum[i - 1][j] (at the current column) and sum[i][j - 1] (at the left column). So we need not maintain the full m*n matrix. Maintaining two columns is enough, and now we have the following code.
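The two-column code the sentence above promises is not included in this excerpt; a plausible sketch of it, keeping the author's column-by-column framing:

class Solution {
public:
    int minPathSum(vector<vector<int>>& grid) {
        int m = grid.size();
        int n = grid[0].size();
        vector<int> pre(m, grid[0][0]);  // previous column of minimum path sums
        vector<int> cur(m, 0);           // current column being filled
        for (int i = 1; i < m; i++)
            pre[i] = pre[i - 1] + grid[i][0];
        for (int j = 1; j < n; j++) {
            cur[0] = pre[0] + grid[0][j];
            for (int i = 1; i < m; i++)
                cur[i] = min(cur[i - 1], pre[i]) + grid[i][j];
            swap(pre, cur);
        }
        return pre[m - 1];
    }
};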
The problem seems to be a dynamic programming one. Hint: the tag also suggests that! Here are the steps to get the solution incrementally.
Base cases: if n <= 0, then the number of ways should be zero. If n == 1, then there is only one way to climb the stair. If n == 2, then there are two ways to climb the stairs: one solution is one step after another; the other is two steps at one time.
The key intuition to solve the problem is that given a number of stairs n, if we know the number of ways to get to the points [n-1] and [n-2] respectively, denoted as n1 and n2, then the total number of ways to get to the point [n] is n1 + n2. Because from the [n-1] point, we can take one single step to reach [n]. And from the [n-2] point, we could take two steps to get there.
The solutions calculated by the above approach are complete and non-redundant. The two solution sets (n1 and n2) cover all the possible cases on how the final step is taken. And there would be NO overlapping among the final solutions constructed from these two solution sets, because they differ in the final step.
Now given the above intuition, one can construct an array where each node stores the solution for each number n. Or if we look at it closer, it is clear that this is basically the Fibonacci sequence, with the starting numbers 1 and 2 instead of 1 and 1.
Solution 1: Iterative DP (rolling variables)
class Solution {
public:
    int climbStairs(int n) {
        int dp0 = 1;
        int dp1 = 1;
        for (int i = 2; i <= n; i++) {
            int tmp = dp0;
            dp0 = dp1;
            dp1 = tmp + dp1;
        }
        return dp1;
    }
};
public int maxProfit(int[] prices) {
    int lsf = Integer.MAX_VALUE; // least so far
    int op = 0;                  // overall profit
    int pist = 0;                // profit if sold today
    for (int i = 0; i < prices.length; i++) {
        if (prices[i] < lsf) {   // if we found a new buy value smaller than the previous one
            lsf = prices[i];     // update our least so far
        }
        pist = prices[i] - lsf;  // calculate the profit if sold today: today's price minus the cheapest buy
        if (op < pist) {         // if pist is more than our previous overall profit
            op = pist;           // update the overall profit
        }
    }
    return op;                   // return op
}
Solution 0: Brute Force
class Solution {
public:
    int maxProfit(vector<int>& prices) {
        int n = (int)prices.size(), ans = 0;
        for (int i = 0; i < n; ++i) {
            for (int j = i + 1; j < n; ++j) {
                ans = max(ans, prices[j] - prices[i]);
            }
        }
        return ans;
    }
};
class Solution {
    public int maxProfit(int[] prices) {
        if (prices.length <= 1) return 0;
        int min = prices[0], max = 0;
        for (int i = 1; i < prices.length; i++) {
            max = Math.max(max, prices[i] - min);
            min = Math.min(min, prices[i]);
        }
        return max;
    }
}
class Solution {
    public int maxProfit(int[] prices) {
        // dp is indexed from day 0 up to day i
        int[] dp = new int[prices.length + 1];
        dp[0] = 0; // on day 0 we hold no stock, so the maximum profit is 0
        dp[1] = 0; // on day 1 we can only buy, not sell, so the maximum profit is also 0
        for (int i = 1; i < prices.length; i++) {
            int A = dp[i] + prices[i] - prices[i - 1]; // option 1: take today's price change
            int B = dp[i];                             // option 2: do nothing today
            dp[i + 1] = A > B ? A : B;                 // i starts from 0, so dp[i + 1] is the current day
        }
        return dp[prices.length];
    }
}
We use a boolean vector dp[]. dp[i] is set to true if a valid word (or sequence of words) ends at position i. The optimization is to look backward from the current position i, and only take the substring and do the dictionary lookup when a preceding position j with dp[j] == true is found.
Solution 1: Dynamic Programming
bool wordBreak(string s, vector<string>& wordDict) {
    vector<bool> dp(s.size() + 1, false);
    unordered_set<string> m(wordDict.begin(), wordDict.end());
    dp[0] = true;
    for (int i = 1; i <= s.size(); ++i) {
        for (int j = 0; j < i; ++j) {
            if (dp[j] && m.find(s.substr(j, i - j)) != m.end()) {
                dp[i] = true;
                break;
            }
        }
    }
    return dp[s.size()];
}
There can be multiple ways to frame the solution once we get the intuition right! So, what should the intuition be? Let's discuss that!

Let's consider the array to have no 0s (for the moment). So, on what factor does the answer depend now? It surely depends on the count of negative numbers in the array!

There are 2 possibilities: either the count of negative numbers is even, or it is odd.

If the count is even, then obviously we would want to include all of them (in fact the whole array) to maximize the product, as multiplying an even number of negative numbers makes the result positive.

If the count is odd, then we would want to exclude one negative number from our product so that the product gets maximized. So now the question is: which negative number to exclude? Example: arr = {-2, -3, -1, -4, -5}; which number should be excluded? On observing it, one fact becomes clear: the number to be ignored is either the first negative number or the last one.

Note that we cannot exclude a negative number that is not the first or the last, because if we did, we would be breaking the product at that point and would need to exclude all the other negative numbers following it, and that need not result in the maximum product. Having said all that, the question now is whether to exclude the first or the last negative number in the array. We can only know the answer by trying both. So first we take the product from the beginning of the array, including the first negative number and leaving out the last one, and then we do the vice versa for the other scenario. In that example, we would either leave out the first negative number (-2, and the total product is the product of the rest of the numbers) or leave out the last one (-5). The maximum of those 2 cases is the answer!

Now, what if the array has zeroes? Honestly, it changes little: we can consider the parts on both sides of a 0 as subarrays, and the maximum product is max(subarray1_ans, subarray2_ans). And how do we mark the division point, i.e. how do we separate the subarrays? That's pretty simple, and we have done it in Kadane's algorithm: just reset curr_ongoing_prod to 1, and maintain one max_prod variable separately.

Example: arr = {-2, 1, 4, 5, 0, -3, 4, 6, 1, -2}. We can consider subarray1 = {-2, 1, 4, 5} and subarray2 = {-3, 4, 6, 1, -2}, and then max(subarray1_ans, subarray2_ans) is our answer!
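A sketch of the idea as a two-directional pass, resetting the running product to 1 at zeros (the code and names are mine; the original post stays in prose, and the problem's guarantee that every prefix/suffix product fits in 32 bits keeps int safe):

#include <algorithm>
#include <vector>
using namespace std;

int maxProduct(vector<int>& nums) {
    int n = nums.size(), prod = 1, best = nums[0];
    // forward pass: best product over every prefix of each zero-free block
    for (int i = 0; i < n; ++i) {
        prod *= nums[i];
        best = max(best, prod);
        if (prod == 0) prod = 1;  // a zero splits the array into independent subarrays
    }
    // backward pass: covers the case where the leading negative number must be dropped
    prod = 1;
    for (int i = n - 1; i >= 0; --i) {
        prod *= nums[i];
        best = max(best, prod);
        if (prod == 0) prod = 1;
    }
    return best;
}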
This is a classic 1D DP problem where at every step we have a choice to make. The first and foremost thing in any DP problem is to find the recurrence relation! At every i-th house the robber has 2 options: (a) rob the current house i, or (b) don't rob the current house.

If he robs the i-th house, the money he can get up to the i-th house equals the money robbed up to the (i-2)-th house plus the money at the i-th house; let's say the total money robbed in this case equals X. If he does not rob it, the money robbed up to the i-th house equals the money robbed up to the (i-1)-th house; let's say the total in this case equals Y. So the maximum money he gets up to the i-th house is max(X, Y). Example of case (a): nums = {2, 3, 2}. Here the robber robs the house at index 2, as nums[2] + nums[0] > nums[1]. Example of case (b): nums = {2, 7, 3}. Here the maximum money robbed up to index 2 is not nums[2] + nums[0], as nums[1] is greater than the sum of the money at both those houses.
We can achieve the desired solution to this problem in multiple ways; let's start with the simpler ones and then work on optimizing the time and space complexities.

Simple Recursion. Time Complexity: O(2^n), which gives us TLE. Space Complexity: O(1), ignoring the recursion stack.
Solution 1: Dynamic Programming
DP recurrence: dp[i] = max(dp[i-2] + nums[i], dp[i-1])
class Solution {
public:
    int rob(vector<int>& nums) {
        if (nums.empty()) {
            return 0;
        }
        int size = nums.size();
        if (size == 1) {
            return nums[0];
        }
        vector<int> dp = vector<int>(size, 0);
        dp[0] = nums[0];
        dp[1] = max(nums[0], nums[1]);
        for (int i = 2; i < size; i++) {
            dp[i] = max(dp[i - 2] + nums[i], dp[i - 1]);
        }
        return dp[size - 1];
    }
};
class Solution {
public:
    int rob(vector<int>& nums) {
        if (nums.empty()) {
            return 0;
        }
        int size = nums.size();
        if (size == 1) {
            return nums[0];
        }
        int first = nums[0], second = max(nums[0], nums[1]);
        for (int i = 2; i < size; i++) {
            int temp = second;
            second = max(first + nums[i], second);
            first = temp;
        }
        return second;
    }
};
Approach 1: Brute Force The simplest approach consists of trying to find out every possible square of 1’s that can be formed from within the matrix. The question now is – how to go for it?
We use a variable to contain the size of the largest square found so far and another variable to store the size of the current one, both initialized to 0. Starting from the upper-left point of the matrix, we search for a 1. No operation needs to be done for a 0. Whenever a 1 is found, we try to find out the largest square that can be formed including that 1. For this, we move diagonally (right and downwards), i.e. we increment the row index and column index temporarily, and then check whether all the elements of that row and column are 1 or not. If all the elements happen to be 1, we move diagonally further as previously. If even one element turns out to be 0, we stop this diagonal movement and update the size of the largest square. Now, we continue the traversal of the matrix from the element next to the initial 1 found, till all the elements of the matrix have been traversed.
Complexity Analysis
Time complexity : O((mn)^2). In the worst case, we need to traverse the complete matrix for every 1. Space complexity : O(1). No extra space is used.
Solution 0: Brute Force
Solution 1: Dynamic Programming
class Solution {
    public int maximalSquare(char[][] matrix) {
        /**
         * dp[i][j] is the side length of the largest square whose bottom-right
         * corner is at row i, column j. The recurrence is:
         * dp[i][j] = 1 + min(dp[i-1][j-1], dp[i-1][j], dp[i][j-1]);
         **/
        int m = matrix.length;
        if (m < 1) return 0;
        int n = matrix[0].length;
        int max = 0;
        int[][] dp = new int[m + 1][n + 1];

        for (int i = 1; i <= m; ++i) {
            for (int j = 1; j <= n; ++j) {
                if (matrix[i - 1][j - 1] == '1') {
                    dp[i][j] = 1 + Math.min(dp[i - 1][j - 1], Math.min(dp[i - 1][j], dp[i][j - 1]));
                    max = Math.max(max, dp[i][j]);
                }
            }
        }
        return max * max;
    }
}
Idea: This question is very similar to Coin Change (https://leetcode.com/problems/coin-change/discuss/1104203/C%2B%2B-Super-Simple-and-Short-Dynamic-Programming-Solution). The only difference is that in Coin Change we get a vector of coins, while here we know that the coins are all the perfect squares. So our first step is to construct a "coin" vector; then we do it the same way as Coin Change.
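A sketch of that coin-change-style DP (my rendering of the idea; the linked post's code is not reproduced here):

#include <algorithm>
#include <vector>
using namespace std;

int numSquares(int n) {
    // step 1: build the "coin" vector of all perfect squares <= n
    vector<int> coins;
    for (int i = 1; i * i <= n; ++i) coins.push_back(i * i);
    // step 2: classic coin change - fewest coins summing to n
    vector<int> dp(n + 1, n + 1);  // n + 1 acts as infinity (the answer is at most n)
    dp[0] = 0;
    for (int i = 1; i <= n; ++i)
        for (int c : coins)
            if (c <= i) dp[i] = min(dp[i], dp[i - c] + 1);
    return dp[n];
}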
class Solution {
public:
    // check whether x is a perfect square
    bool isPerfectSquare(int x) {
        int y = sqrt(x);
        return y * y == x;
    }

    // check whether x can be written as 4^k * (8m + 7)
    bool checkAnswer4(int x) {
        while (x % 4 == 0) {
            x /= 4;
        }
        return x % 8 == 7;
    }

    int numSquares(int n) {
        if (isPerfectSquare(n)) {
            return 1;
        }
        if (checkAnswer4(n)) {
            return 4;
        }
        for (int i = 1; i * i <= n; i++) {
            int j = n - i * i;
            if (isPerfectSquare(j)) {
                return 2;
            }
        }
        return 3;
    }
};
This is a classic Dynamic Programming problem. Let dp[i] be the length of the longest increasing subsequence of nums[0..i] that has nums[i] as its last element.
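A sketch of that DP (my code, using the standard recurrence dp[i] = max(dp[j] + 1) over all j < i with nums[j] < nums[i]):

#include <algorithm>
#include <vector>
using namespace std;

int lengthOfLIS(vector<int>& nums) {
    int n = nums.size(), ans = 0;
    vector<int> dp(n, 1);  // each element alone is an increasing subsequence of length 1
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < i; ++j)
            if (nums[j] < nums[i])             // nums[i] can extend the LIS ending at j
                dp[i] = max(dp[i], dp[j] + 1);
        ans = max(ans, dp[i]);
    }
    return ans;
}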
Complexity
Time: O(N^2), where N <= 2500 is the number of elements in array nums. Space: O(N)

✔️ Solution 2: Greedy with Binary Search
Let's construct the idea from the following example. Consider nums = [2, 6, 8, 3, 4, 5, 1], and let's try to build increasing subsequences starting with an empty one: sub1 = [].

Pick the first element: sub1 = [2]. 6 is greater than the previous number: sub1 = [2, 6]. 8 is greater than the previous number: sub1 = [2, 6, 8]. 3 is less than the previous number; we can't extend the subsequence sub1, but we must keep 3 because in the future the longest subsequence may start with [2, 3]: sub1 = [2, 6, 8], sub2 = [2, 3]. With 4, we can't extend sub1, but we can extend sub2: sub1 = [2, 6, 8], sub2 = [2, 3, 4]. With 5, we can't extend sub1, but we can extend sub2: sub1 = [2, 6, 8], sub2 = [2, 3, 4, 5]. With 1, we can't extend either sub1 or sub2, but we need to keep 1: sub1 = [2, 6, 8], sub2 = [2, 3, 4, 5], sub3 = [1]. Finally, the length of the longest increasing subsequence = len(sub2) = 4.

In the above steps, we need to keep several sub-arrays (sub1, sub2, ..., subk), which causes poor performance. But notice that we can keep just one sub-array: when a new number x is not greater than the last element of the subsequence sub, we do a binary search to find the smallest element >= x in sub and replace it with x. Let's run the example nums = [2, 6, 8, 3, 4, 5, 1] again:

Pick the first element: sub = [2]. 6 is greater than the previous number: sub = [2, 6]. 8 is greater than the previous number: sub = [2, 6, 8]. 3 is less than the previous number, so we can't extend sub. We find the smallest number >= 3 in sub, which is 6, and overwrite it: sub = [2, 3, 8]. 4 is less than the previous number, so we can't extend sub; we overwrite 8 with 4: sub = [2, 3, 4]. 5 is greater than the previous number: sub = [2, 3, 4, 5]. 1 is less than the previous number, so we can't extend sub; we overwrite 2 with 1: sub = [1, 3, 4, 5]. Finally, the length of the longest increasing subsequence = len(sub) = 4.
Complexity
Time: O(N * logN), where N <= 2500 is the number of elements in array nums. Space: O(N), we can achieve O(1) in space by overwriting values of sub into original nums array.
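A sketch of the single-array version with lower_bound (my rendering of the steps above):

#include <algorithm>
#include <vector>
using namespace std;

int lengthOfLIS(vector<int>& nums) {
    vector<int> sub;  // sub stays sorted; its length equals the LIS length so far
    for (int x : nums) {
        auto it = lower_bound(sub.begin(), sub.end(), x);  // smallest element >= x
        if (it == sub.end()) sub.push_back(x);  // x extends the subsequence
        else *it = x;                           // x overwrites the smallest element >= x
    }
    return sub.size();
}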
ugly number    k sorted lists
     1     ->   2    7   13   19    (1 * [2,7,13,19])
     2     ->   4   14   26   38    (2 * [2,7,13,19])
     4     ->   8   28   52   76    (4 * [2,7,13,19])
     7     ->  14   49   91  133    (7 * [2,7,13,19])
     8     ->  16   56  ...  ...    (8 * [2,7,13,19])
    ...         .    .    .    .

We can see that each prime number in primes[] forms a sorted list (one column above), and now our job is to merge them and find the nth minimum.
Here we don't have the next pointer for each node to trace the next potential candidate. But as we can see in the graph, we can make use of the ugly number we have produced so far!
Solution 1: Min-Heap
class Solution {
    public int nthSuperUglyNumber(int n, int[] primes) {
        PriorityQueue<Long> queue = new PriorityQueue<>();
        long res = 1;
        for (int i = 1; i < n; i++) {
            for (int prime : primes) {
                queue.add(prime * res);
            }
            res = queue.poll();
            while (!queue.isEmpty() && res == queue.peek())
                queue.poll();
        }
        return (int) res;
    }
}
If you carefully observe the 3 codes below, you will see that the DP Memoization is derived from the Recursion code just by changing 3 lines, and the DP Tabulation is derived from the DP Memoization.
Recursion Time: O(2^n) Space: O(n)
Writing a recursive function is all about finding two things:

The base case: just calculate the output for the smallest possible input. The choice diagram: for any given input, just see what choices we have.
DP Memoization Time: O(n.m) Space: O(n.m)
In the above recursive case, we were doing repeated work in the form of subproblems. Hence we store the results of those subproblems in a table to reduce the number of recursive calls.
DP Tabulation Time: O(n.m) Space: O(n.m)
We have reached the best conceivable run time for this question, but we still have recursive calls in the previous algorithm, which might lead to a stack overflow error in the worst case when the recursive calls are many. Hence we want to totally eliminate the notion of recursion. To do that, we simply convert the recursion into iterative code.
The below code is bottom up dynamic programming because we are starting from the first element in the 2D array and filling the DP from this first element till the last element. And eventually, the last cell stores our final result.
Solution 1: Dynamic Programming
class Solution {
public:
    int coinChange(vector<int>& coins, int amount) {
        int Max = amount + 1;
        vector<int> dp(amount + 1, Max);
        dp[0] = 0;
        for (int i = 1; i <= amount; ++i) {
            for (int j = 0; j < (int)coins.size(); ++j) {
                if (coins[j] <= i) {
                    dp[i] = min(dp[i], dp[i - coins[j]] + 1);
                }
            }
        }
        return dp[amount] > amount ? -1 : dp[amount];
    }
};
void coinChange(vector<int>& coins, int amount, int c_index, int count, int& ans) {
    if (amount == 0) {
        ans = min(ans, count);
        return;
    }
    if (c_index == coins.size()) return;

    for (int k = amount / coins[c_index]; k >= 0 && k + count < ans; k--) {
        coinChange(coins, amount - k * coins[c_index], c_index + 1, count + k, ans);
    }
}

int coinChange(vector<int>& coins, int amount) {
    if (amount == 0) return 0;
    sort(coins.rbegin(), coins.rend());
    int ans = INT_MAX;
    coinChange(coins, amount, 0, 0, ans);
    return ans == INT_MAX ? -1 : ans;
}
At first glance, the problem exhibits the feature of "optimal substructure": if we want to rob maximum amount of money from current binary tree (rooted at root), we surely hope that we can do the same to its left and right subtrees.
So going along this line, let's define the function rob(root) which will return the maximum amount of money that we can rob for the binary tree rooted at root; the key now is to construct the solution to the original problem from solutions to its subproblems, i.e., how to get rob(root) from rob(root.left), rob(root.right), ... etc.
Apparently the analyses above suggest a recursive solution. And for recursion, it's always worthwhile figuring out the following two properties:
Termination condition: when do we know the answer to rob(root) without any calculation? Of course when the tree is empty ---- we've got nothing to rob so the amount of money is zero.
Recurrence relation: i.e., how to get rob(root) from rob(root.left), rob(root.right), ... etc. From the point of view of the tree root, there are only two scenarios at the end: root is robbed or is not. If it is, due to the constraint that "we cannot rob any two directly-linked houses", the next level of subtrees that are available would be the four "grandchild-subtrees" (root.left.left, root.left.right, root.right.left, root.right.right). However if root is not robbed, the next level of available subtrees would just be the two "child-subtrees" (root.left, root.right). We only need to choose the scenario which yields the larger amount of money.
However the solution runs very slowly (1186 ms) and barely got accepted (the time complexity turns out to be exponential, see my comments below).
Step II -- Think one step further
In step I, we only considered the aspect of "optimal substructure", but think little about the possibilities of overlapping of the subproblems. For example, to obtain rob(root), we need rob(root.left), rob(root.right), rob(root.left.left), rob(root.left.right), rob(root.right.left), rob(root.right.right); but to get rob(root.left), we also need rob(root.left.left), rob(root.left.right), similarly for rob(root.right). The naive solution above computed these subproblems repeatedly, which resulted in bad time performance. Now if you recall the two conditions for dynamic programming (DP): "optimal substructure" + "overlapping of subproblems", we actually have a DP problem. A naive way to implement DP here is to use a hash map to record the results for visited subtrees.
Step III -- Think one step back
In step I, we defined our problem as rob(root), which will yield the maximum amount of money that can be robbed of the binary tree rooted at root. This leads to the DP problem summarized in step II.
Now let's take one step back and ask why we have overlapping subproblems. If you trace all the way back to the beginning, you'll find the answer lies in the way how we have defined rob(root). As I mentioned, for each tree root, there are two scenarios: it is robbed or is not. rob(root) does not distinguish between these two cases, so "information is lost as the recursion goes deeper and deeper", which results in repeated subproblems.
If we were able to maintain the information about the two scenarios for each tree root, let's see how it plays out. Redefine rob(root) as a new function which will return an array of two elements, the first element of which denotes the maximum amount of money that can be robbed if root is not robbed, while the second element signifies the maximum amount of money robbed if it is robbed.
Let's relate rob(root) to rob(root.left) and rob(root.right)..., etc. For the 1st element of rob(root), we only need to sum up the larger elements of rob(root.left) and rob(root.right), respectively, since root is not robbed and we are free to rob its left and right subtrees. For the 2nd element of rob(root), however, we only need to add up the 1st elements of rob(root.left) and rob(root.right), respectively, plus the value robbed from root itself, since in this case it's guaranteed that we cannot rob the nodes of root.left and root.right.
As you can see, by keeping track of the information of both scenarios, we decoupled the subproblems and the solution essentially boiled down to a greedy one. Here is the program:
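The program the paragraph announces, sketched as the two-value recursion described above (assuming the usual LeetCode TreeNode definition):

class Solution {
public:
    int rob(TreeNode* root) {
        auto res = robSub(root);
        return max(res.first, res.second);
    }

private:
    // returns {max money if root is NOT robbed, max money if root IS robbed}
    pair<int, int> robSub(TreeNode* root) {
        if (!root) return {0, 0};
        auto left = robSub(root->left);
        auto right = robSub(root->right);
        int notRobbed = max(left.first, left.second) + max(right.first, right.second);
        int robbed = root->val + left.first + right.first;  // children must not be robbed
        return {notRobbed, robbed};
    }
};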
Let's try solving it using a brute-force approach. We need to partition the array into two subsets. This means that for each element of the array, we can either place it in the 1st subset or place it in the 2nd subset.

Since we are only concerned with the sums of the subsets being equal, we will maintain the 1st subset's sum, sum1, and the 2nd subset's sum, sum2. For each element, we try both possible options: either placing it in the 1st subset and increasing sum1, or placing it in the 2nd subset and increasing sum2. Finally, once we reach the end of the array, we check if the current placements give equal sums. If none of the possible placements give equal sums, we return false.
We can slightly optimize the above approach by observing that an equal partition is only possible when the total sum of the array can be equally split, i.e. it is even. This effectively allows us to directly return false if the sum is odd. When the sum is even, we only need to check if we can construct one subset with its sum equal to total_sum / 2 (the other will automatically have the same sum, so we don't need to care about it). Thus the above can be optimized to:

Time Complexity : O(2^N), where N is the number of elements in nums. For each element, we try both choices of including or excluding it from the subset, leading to 2·2·2... (N times) recursive branches, which gives a time complexity of O(2^N). Space Complexity : O(N), required by the recursive stack.
✔️ Solution - II (Dynamic Programming - Memoization)
The above solution times out because we were performing repeated calculations over and over unnecessarily. The result for given parameters sum, i (can we achieve subset sum = sum starting from index i?) will always be the same. So once we have calculated it, we don't need to repeat the whole calculation when it is called from another recursive branch. Instead we can save the result for this state and return it whenever it is asked for again.
Thus, we can use dynamic programming here. We use a dp array where dp[i][sum] denotes whether subset-sum = sum can be achieved or not starting from the ith index.
Initially all elements in dp are initialized to -1, denoting that we have not computed that state. dp[i][sum] == 1 means that we can achieve sum starting from the ith index; dp[i][sum] == 0 means we can't achieve that sum starting from the ith index.
Time Complexity : O(N*sum), where N is the number of elements in nums & sum is the sum of all elements in nums. Space Complexity : O(N*sum), required by dp
✔️ Solution - III (Optimized Dynamic Programming - Memoization)
I am not sure if my reasoning is correct for this approach or it passes due to weak test cases. All other solutions I have seen are 2D memo. I initially thought this approach would fail but seems like it passes (tried on other OJs too).
We can further optimize the above memoization approach by reducing the state that we memoize. We used dp[i][sum] in the above approach to denote whether sum can be achieved starting from the ith element in nums. But we don't really care whether we achieve the sum starting from index i or not; we are only concerned with whether we can achieve it at all. Thus, we can reduce the state down to a 1D dp, where dp[sum] denotes whether sum is possible to achieve from nums or not.
It is essential that we 1st recurse by choosing the current element and only then try the branch of not choosing. This prevents the recursive function from going all the way down the recursion tree by not choosing any elements and incorrectly marking sums as not achievable which could have been achievable if we had chosen earlier elements that we skipped.
I am not sure if this is the best explanation, but if someone can explain it better or provide some kind of proof/reasoning as to why recursing by 1st picking the element and then not picking it guarantees the correct answer, do comment below.
Time Complexity : O(N*sum)
Space Complexity : O(sum), required by dp
✔️ Solution - IV (Dynamic Programming - Tabulation)
We can convert the dp approach to an iterative version. Here we will again use a dp array, where dp[sum] will denote whether sum is achievable or not. Initially, we have dp[0] = true since a 0 sum is always achievable. Then for each element num, we will iterate and find if it is possible to form a sum j by adding num to some previously formable sum.

One thing to note is that it is essential to iterate from right to left in the inner loop below, to avoid marking some sum, say j1, as achievable and then using that very result within the same pass to mark a bigger sum j2 (j2 = j1 + num) as achievable. That would be wrong, since it would mean choosing num multiple times. So we go from right to left to avoid overwriting results updated in the current loop.
Time Complexity : O(N*sum) Space Complexity : O(sum)
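A sketch of this tabulation with the right-to-left inner loop (my code following the description above):

#include <numeric>
#include <vector>
using namespace std;

bool canPartition(vector<int>& nums) {
    int total = accumulate(nums.begin(), nums.end(), 0);
    if (total % 2) return false;           // an odd total can never be split equally
    int half = total / 2;
    vector<bool> dp(half + 1, false);
    dp[0] = true;                          // a sum of 0 is always achievable
    for (int num : nums)
        for (int j = half; j >= num; --j)  // right to left: each num is used at most once
            dp[j] = dp[j] || dp[j - num];
    return dp[half];
}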
✔️ Solution - V (Dynamic Programming using bitmask)
We can use bitmasking to condense the inner loop of previous approach into a single bit-shift operation. Here we will use bitset in C++ consisting of sum number of bits (other language can use bigInt or whatever support is provided for such operations).
Each bit in bitset (dp[i]) will denote whether sum i is possible or not. Now, when we get a new number num, it can be added to every sum already possible, i.e, every dp[i] bit which is already 1. This operation can be performed using bit shift as dp << num. How? See the following example
Suppose the current dp = 1011. This means sums 0, 1 and 3 are possible to achieve. Let the next number we get be num = 5. Now we can achieve (0, 1 and 3), which were already possible, and (5, 6, 8), which are the new sums after adding num = 5 to the previous sums.

1. 'dp << num': this operation will add num to every achievable sum.

   bit:       3 2 1 0                          bit:      8 7 6 5 4 3 2 1 0
   dp       = 1 0 1 1   --(5 shifts left)-->   dp << 5 = 1 0 1 1 0 0 0 0 0

   Note that the shifted dp now denotes 5, 6, 8, which are the new sums possible. We combine it with the previous sums using the '|' operation:

   bit:                 8 7 6 5 4 3 2 1 0
2. 'dp | dp << num'   = 1 0 1 1 0 1 0 1 1
And now we have every possible sum after combining new num with previous possible sums. Finally, we will return dp[halfSum] denoting whether half sum is achievable or not.
Time Complexity : O(N*sum) Space Complexity : O(sum)
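A sketch with std::bitset (the fixed bitset size is my assumption based on the problem's typical constraints, 200 elements of value at most 100):

#include <bitset>
#include <numeric>
#include <vector>
using namespace std;

bool canPartition(vector<int>& nums) {
    int total = accumulate(nums.begin(), nums.end(), 0);
    if (total % 2) return false;
    bitset<20001> dp;      // bit i is set <=> sum i is achievable
    dp[0] = 1;
    for (int num : nums)
        dp |= dp << num;   // add num to every already-achievable sum in one shift
    return dp[total / 2];
}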
Approach #1: Brute Force with Initial Character Map [Time Limit Exceeded]

Intuition and Algorithm
In a typical brute force, for all starting indices i of A and j of B, we will check for the longest matching subarray A[i:i+k] == B[j:j+k] of length k. This would look roughly like the following pseudocode:
ans = 0
for i in [0 .. A.length - 1]:
    for j in [0 .. B.length - 1]:
        k = 0
        while (A[i+k] == B[j+k]):  # and i+k < A.length, etc.
            k += 1
        ans = max(ans, k)

Our insight is that in typical cases, most of the time A[i] != B[j]. We could instead keep a hashmap Bstarts[A[i]] = all j such that B[j] == A[i], and only loop through those in our j loop.
If there is a length k subarray common to A and B, then there is a length j <= k subarray as well.
Let check(length) be the answer to the question "Is there a subarray with length length, common to A and B?" This is a function with range that must take the form [True, True, ..., True, False, False, ..., False] with at least one True. We can binary search on this function.
Algorithm
Focusing on the binary search, our invariant is that check(hi) will always be False. We'll start with hi = min(len(A), len(B)) + 1; clearly check(hi) is False.
Now we perform our check in the midpoint mi of lo and hi. When it is possible, then lo = mi + 1, and when it isn't, hi = mi. This maintains the invariant. At the end of our binary search, hi == lo and lo is the lowest value such that check(lo) is False, so we want lo - 1.
As for the check itself, we can naively check whether any A[i:i+k] == B[j:j+k] using set structures.
Approach #3: Dynamic Programming [Accepted]

Intuition and Algorithm
Since a common subarray of A and B must start at some A[i] and B[j], let dp[i][j] be the longest common prefix of A[i:] and B[j:]. Whenever A[i] == B[j], we know dp[i][j] = dp[i+1][j+1] + 1. Also, the answer is max(dp[i][j]) over all i, j.
We can perform bottom-up dynamic programming to find the answer based on this recurrence. Our loop invariant is that the answer is already calculated correctly and stored in dp for any larger i, j.
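A sketch of that bottom-up DP (my rendering; the loops go from larger i, j to smaller, matching the stated invariant):

#include <algorithm>
#include <vector>
using namespace std;

int findLength(vector<int>& A, vector<int>& B) {
    int m = A.size(), n = B.size(), ans = 0;
    // dp[i][j] = length of the longest common prefix of A[i:] and B[j:]
    vector<vector<int>> dp(m + 1, vector<int>(n + 1, 0));
    for (int i = m - 1; i >= 0; --i)
        for (int j = n - 1; j >= 0; --j)
            if (A[i] == B[j]) {
                dp[i][j] = dp[i + 1][j + 1] + 1;
                ans = max(ans, dp[i][j]);
            }
    return ans;
}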
We have to return the total number of continuous subarrays whose sum equals k. Let's take an example not given in the question, using negative numbers. Suppose our array is arr[] = [-1, -1, 1] and k = 0.

So the answer should be 1, as there is only one subarray whose sum is 0, i.e. (-1 + 1).

Solution - I (Brute force, TLE)

Since we are very obedient people and don't want to do anything extra from our side, we will try to generate the sum of each subarray and, if it matches with k, increment our answer. This is the most basic thing we can do. Time Complexity --> O(n^2), where n is the size of the array. Space Complexity --> O(1), as we are not using anything extra from our side. It passes [85 / 89] of the built-in test cases.
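A sketch of that brute force with a running sum per starting index (my code, matching the description):

#include <vector>
using namespace std;

int subarraySum(vector<int>& nums, int k) {
    int n = nums.size(), count = 0;
    for (int i = 0; i < n; ++i) {
        int sum = 0;
        for (int j = i; j < n; ++j) {  // extend the subarray [i..j] one element at a time
            sum += nums[j];
            if (sum == k) ++count;
        }
    }
    return count;
}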
n == nums.length
1 <= n <= 10^4
0 <= nums[i] <= n
All the numbers in nums are unique.
The basic idea is to use the XOR operation. We all know that a ^ b ^ b = a, which means two XOR operations with the same number eliminate that number and reveal the original one. In this solution, I apply the XOR operation to both the indices and the values of the array. In a complete array with no missing numbers, the indices and values correspond perfectly (nums[index] = index), so in an array with a missing number, what is left at the end is the missing number.
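A sketch of that index/value XOR (my rendering; the array length n serves as the one extra index):

#include <vector>
using namespace std;

int missingNumber(vector<int>& nums) {
    int res = nums.size();           // start with the extra index n
    for (int i = 0; i < (int)nums.size(); ++i)
        res ^= i ^ nums[i];          // matching index/value pairs cancel out
    return res;                      // only the missing number survives
}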
Now we cannot use the + operator, which means it is obvious that we have to use some sort of bit manipulation. But the real question is how? The answer lies within the procedure of addition itself, which I am going to show you below.

At the binary level, how exactly do we add two numbers? Let us see:
  _
  a = 2 = 010
  b = 3 = 011
  ------------
  c = 5 = 101

In the position where we have a dash above, we are generating a carry, which will be carried over to the next bit and added to it. So the addition pattern goes like this:

  0      0      1      1
+ 0    + 1    + 0    + 1
----   ----   ----   ----
  0      1      1      0 (with a carry 1)

Does this pattern look familiar? Have you seen it in the XOR table? Let's see the XOR table quickly:

a b | XOR
- - | ---
0 0 |  0
0 1 |  1
1 0 |  1
1 1 |  0

These are exactly the same; hence for the addition we need to use the XOR operator. But what to do with the carry? We need to add that carry to the next bit, right? That is what we have seen in the implementation of addition as well. We will do exactly that, but we can NOT use addition anyway. Before that, let's do the XOR for the 2 and 3 example.

-- Doing only XOR --
a = 2 = 010
b = 3 = 011
------------
c = 1 = 001

1 is NOT our answer, and in this procedure we have left the carry out, which is 100. Now, what's the pattern for finding the carry? We AND the two numbers and then LEFT-SHIFT the result by 1. Didn't get it?

-- Doing only AND --
a = 2 = 010
b = 3 = 011
------------
c = 2 = 010
----------- Doing Left-Shift by 1 (<<1) -----------
c = 4 = 100

So we found the carry as well, and believe it or not, that is the entire algorithm. You repeat step 1 (XOR) and step 2 (AND with left-shift) until step 2 becomes 0, and you will have your answer.
Example -
a = 2 = 010
b = 3 = 011
-----------
x = 1 = 001 = a
c = 4 = 100 = b
-----------
x = 5 = 101
c = 0 = 000

where x = XOR and c = AND with left-shift. Since the carry becomes 0, our answer is the XOR result: 5.
Below is the working code for the same, and you can run this code to find the desired answer.
class Solution {
    public int getSum(int a, int b) {
        while (b != 0) {
            int temp = (a & b) << 1; // the carry
            a = a ^ b;               // the sum without the carry
            b = temp;
        }
        return a;
    }
}

Time Complexity: O(1) Space Complexity: O(1)
Time is O(1), because the max and min bounds are 1000 and -1000 respectively, which means the input will NOT be arbitrarily large, and it will be in the limits, hence the time will be constant.
The code for c and c++ will be very similar, and python will be a little different. If you like this approach, then please give me a thumbs up.
Solution I: We use the XOR bitwise operation to get all the bits that are set either in x or in y, but not in both. Then we count the number of such bits and we're done!
class Solution {
public:
    int hammingDistance(int x, int y) {
        int res = 0;
        int num = x ^ y;
        while (num) {
            res += num % 2;
            num >>= 1;
        }
        return res;
    }
};

Solution II - Without XOR: We iterate x and y in parallel. If (x % 2 != y % 2) - only one of the rightmost bits is set - we add one to res.
class Solution {
public:
    int hammingDistance(int x, int y) {
        int res = 0;
        while (x || y) {
            res += (x % 2 != y % 2);
            x >>= 1, y >>= 1;
        }
        return res;
    }
};
We can simply iterate and find the leftmost bit (MSB) that is set. From that point onwards, we flip every bit till we reach the rightmost bit (LSB). To flip a bit, we can use ^ 1 (XOR 1) operation. This will flip a bit to 0 if it is set and flip to 1 if it is unset.
C++
class Solution {
public:
    int findComplement(int num) {
        int i = 31;
        while ((num & 1 << i) == 0) i--;  // skip the leading 0 bits till we reach the 1st set bit from the left
        while (i >= 0) num ^= 1 << i--;   // flip all remaining bits by XORing with 1
        return num;
    }
};

Python
class Solution:
    def findComplement(self, num):
        i = 31
        while (num & 1 << i) == 0:
            i -= 1
        while i >= 0:
            num ^= 1 << i
            i -= 1
        return num

Time Complexity : O(N), where N is the number of bits. In this case, since we are starting from i=31, it should be constant, but I am denoting the time complexity of this approach as a generalized O(N). Space Complexity : O(1)
✔️ Solution - II (Bit-Manipulation Tricks)
The above method basically finds the leftmost set bit and XORs the remaining bits with 1. A more efficient way to do the same would be to simply XOR num with another number having all bits to the right of num's 1st set bit as 1, and the rest of the bits to the left as 0. This would achieve the same thing as above.
For eg. num = 13
=> num  = 13 (1101)
=> mask = 15 (1111)
-------------------- ^ (XOR)
           2 (0010)

We got all the bits flipped. But how to get the mask?
A simple way would be to initialize mask = 0 and keep flipping bits of it to 1 starting from the rightmost bit (LSB) till we reach the leftmost set bit of num. Now, how do we know that we reached the leftmost set bit in num?
We use another variable tmp = num. Each time, we right-shift tmp, essentially removing the rightmost bit. When we remove the leftmost set bit from it, it will become 0. Thus, we loop till tmp is not 0. C++
class Solution {
public:
    int findComplement(int num) {
        int mask = 0;                        // all bits in mask are initially 0
        for (int tmp = num; tmp; tmp >>= 1)  // set bits in mask to 1 till we reach the leftmost set bit in num
            mask = (mask << 1) | 1;          // left-shift and set the rightmost bit to 1
        return mask ^ num;                   // finally, XORing with mask will flip all bits in num
    }
};

Python
class Solution:
    def findComplement(self, num):
        mask, tmp = 0, num
        while tmp:
            mask = (mask << 1) | 1
            tmp >>= 1
        return mask ^ num

Another way to do the same would be to go the other way around and start with a mask having all bits set to 1. Then we keep setting the rightmost bits of mask to 0 one by one till there are no common set bits left in num and mask (num & mask == 0 when no common set bits are present).
For eg. num = 13 (1101) and mask = -1 (111...1111)  [all 32 bits are set in the binary representation of -1]

1. num = 000...1101   mask = 111...1110  => we still have common set bits
2. num = 000...1101   mask = 111...1100  => we still have common set bits
3. num = 000...1101   mask = 111...1000  => we still have common set bits
4. num = 000...1101   mask = 111...0000  => no more common bits left

Now what?
Now we can simply flip all bits in mask (using ~ operator). It will now have all rightmost bits set to one starting from leftmost set bit in num, i.e, we now have the same mask that we had in previous approach. Now, we can XOR it with num and we get the flipped result C++
class Solution {
public:
    int findComplement(int num) {
        uint mask = -1;                 // -1 is represented in binary with all bits set to 1
        while (mask & num) mask <<= 1;  // remove the rightmost bits 1 by 1 till no common bits are left
        return ~mask ^ num;             // XOR flips all bits in num starting from its leftmost set bit
    }
};

Python
class Solution:
    def findComplement(self, num):
        mask = -1
        while mask & num:
            mask <<= 1
        return ~mask ^ num

This approach uses 1 less operation inside the loop (the 1st approach used 3 operations: right-shift >> on tmp, left-shift << on mask, and | with 1 to set the rightmost bit; this one uses 2: & to check if there are common bits in mask and num, and left-shift << to remove the right set bits one by one).
Time Complexity : O(P) / O(log num), where P is the position of leftmost set bit in num. O(P) ~ O(log2(num)) Space Complexity : O(1)
✔️ Solution - III (Flip Bits from Right to Left)
We can start from the right end and flip the bits one by one. This is somewhat similar to the 1st solution in the above approach. But here we will not use a mask; rather, we use a single bit starting with i = 1 and just directly XOR it with num, then left-shift it and repeat, thus flipping the bits in num one by one.

So, how do we know when to stop? We stop as soon as i > num. This denotes that we have passed the leftmost set bit in num and thus have flipped all the bits that needed to be flipped.
C++
class Solution {
public:
    int findComplement(int num) {
        int i = 1;
        while (i <= num)
            num ^= i, i <<= 1;
        return num;
    }
};

Python
class Solution():
    def findComplement(self, num):
        i = 1
        while i <= num:
            num ^= i
            i <<= 1
        return num

Time Complexity : O(P) / O(log num) Space Complexity : O(1)
The basic idea is that we can construct the result for an n-node tree just from the result for an (n-1)-node tree. Here's how we do it; there are only 2 cases: 1) the nth node is the new root, so newroot->left = oldroot; 2) the nth node is not the root, so we traverse the old tree, and every time a node in the old tree has a right child, we can perform: oldnode->right = nth node, nthnode->left = right child; and when we reach the end of the tree, don't forget we can also add the nth node there. One thing to notice is that every time we push a TreeNode into our result, I push a clone of the root, and I immediately undo what I did to the old node. This is because the old tree may be used several times.
For the recursive solution, we set a lower bound and an upper bound for the tree. When we recurse on the left subtree, the upper bound becomes the value of its root. When we recurse on the right subtree, the lower bound becomes the value of its root.
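A sketch of that bounded recursion (mine; long long bounds sidestep nodes equal to INT_MIN/INT_MAX, and the usual LeetCode TreeNode definition is assumed):

#include <climits>

bool validate(TreeNode* node, long long lo, long long hi) {
    if (!node) return true;
    if (node->val <= lo || node->val >= hi) return false;  // value must lie strictly in (lo, hi)
    return validate(node->left, lo, node->val) &&   // left subtree: upper bound becomes the root's value
           validate(node->right, node->val, hi);    // right subtree: lower bound becomes the root's value
}

bool isValidBST(TreeNode* root) {
    return validate(root, LLONG_MIN, LLONG_MAX);
}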
class Solution {
public:
    bool solve(TreeNode* r1, TreeNode* r2) {
        // See the tree diagram: are r1 and r2 null? No, so this line doesn't execute
        if (r1 == NULL && r2 == NULL) return true;
        // Is any one of r1 or r2 null? Or are their values different? No. Both values are
        // the same, so this else-if won't execute either
        else if (r1 == NULL || r2 == NULL || r1->val != r2->val) return false;
        // Now comes the main part: we make 2 separate recursive calls
        return solve(r1->left, r2->right) && solve(r1->right, r2->left);
        // The first solve() before && executes first:
        //   r1->left is 3 and r2->right is 3; both values are the same,
        //   so they bypass both the if and else-if statements.
        //   Now r1->left is null and r2->right is null,
        //   so they return true from the first if condition.
        // The scene now: the first solve() before && has returned true,
        //   so the expression becomes 'return true && solve()'.
        // Now the solve() after && executes:
        //   similarly it checks 4 and 4, bypassing the if-else statements;
        //   next time both become null, so it returns true.
        // Thus the 2nd solve() also holds true, and 'true && true' is true,
        // so true is returned to the caller, and the tree is a mirror of itself.
        // You can check any test case; the flow of execution stays the same.
    }

    bool isSymmetric(TreeNode* root) {
        // Imagine a tree:    1
        //                  2   2
        //                 3 4 4 3
        // We are standing on the root, which is 1; the function begins,
        // and now r1 and r2 point to 2 and 2 respectively.
        return solve(root->left, root->right);
    }
};
Recall the recursive implementation: when the roots of the two subtrees are equal, we compare the left subtree's left child with the right subtree's right child, and this comparison is implemented with recursion. Now we switch to a queue instead; the idea is as follows: first take two nodes (left and right) out of the queue and compare them; then put left's left child and right's right child into the queue; then put left's right child and right's left child into the queue. The time complexity is O(n), and the space complexity is O(n).
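A sketch of that queue-based check (assuming the standard LeetCode TreeNode; the original post illustrates this step with an animation rather than code):

#include <queue>
using namespace std;

bool isSymmetric(TreeNode* root) {
    if (!root) return true;
    queue<TreeNode*> q;
    q.push(root->left);
    q.push(root->right);
    while (!q.empty()) {
        TreeNode* left = q.front(); q.pop();   // take two nodes out and compare them
        TreeNode* right = q.front(); q.pop();
        if (!left && !right) continue;
        if (!left || !right || left->val != right->val) return false;
        q.push(left->left);  q.push(right->right);  // outer pair
        q.push(left->right); q.push(right->left);   // inner pair
    }
    return true;
}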
Well, since the head pointer may also be modified, we create a pre that points to it to facilitate the reverse process.
For the example list 1 -> 2 -> 3 -> 4 -> 5 in the problem statement, it will become 0 -> 1 -> 2 -> 3 -> 4 -> 5 (we init pre -> val to be 0). We also set a pointer cur to head. Then we keep inserting cur -> next after pre until cur becomes the last node. This idea uses three pointers (pre, cur and temp). You may implement it as follows.
Solution 1: Iteration
// Iterative
public ListNode reverseList(ListNode head) {
    ListNode pre = null;
    ListNode cur = head;
    while (cur != null) {
        ListNode next = cur.next;
        cur.next = pre;
        pre = cur;
        cur = next;
    }
    return pre;
}
Solution 2: Recursion
// Tail-recursive
public ListNode reverseList(ListNode head) {
    return reverse(null, head);
}

// the helper was not shown in the original snippet; this is the standard completion
private ListNode reverse(ListNode pre, ListNode cur) {
    if (cur == null) return pre;
    ListNode next = cur.next;
    cur.next = pre;
    return reverse(cur, next);
}
Well, remember to take advantage of the property of binary search trees, which is, node -> left -> val < node -> val < node -> right -> val. Moreover, both p and q will be the descendants of the root of the subtree that contains both of them. And the root with the largest depth is just the lowest common ancestor. This idea can be turned into the following simple recursive code.
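The simple recursive code the sentence refers to is not included in this excerpt; a sketch of it (standard LeetCode TreeNode assumed):

TreeNode* lowestCommonAncestor(TreeNode* root, TreeNode* p, TreeNode* q) {
    if (p->val < root->val && q->val < root->val)
        return lowestCommonAncestor(root->left, p, q);   // both lie in the left subtree
    if (p->val > root->val && q->val > root->val)
        return lowestCommonAncestor(root->right, p, q);  // both lie in the right subtree
    return root;  // p and q split here (or one equals root): this is the LCA
}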
The approach is pretty intuitive. Traverse the tree in a depth first manner. The moment you encounter either of the nodes p or q, return some boolean flag. The flag helps to determine if we found the required nodes in any of the paths. The least common ancestor would then be the node for which both the subtree recursions return a True flag. It can also be the node which itself is one of p or q and for which one of the subtree recursions returns a True flag.
Let us look at the formal algorithm based on this idea.
Algorithm
Start traversing the tree from the root node. If the current node itself is one of p or q, we mark a variable mid as True and continue the search for the other node in the left and right branches. If either the left or the right branch returns True, this means one of the two nodes was found below. If at any point in the traversal any two of the three flags left, right or mid become True, this means we have found the lowest common ancestor for the nodes p and q.
Complexity Analysis
Time Complexity: O(N), where N is the number of nodes in the binary tree. In the worst case we might be visiting all the nodes of the binary tree.
Space Complexity: O(N). This is because the maximum amount of space utilized by the recursion stack would be N, since the height of a skewed binary tree could be N.
Approach 2: Iterative using parent pointers

Intuition
If we have parent pointers for each node we can traverse back from p and q to get their ancestors. The first common node we get during this traversal would be the LCA node. We can save the parent pointers in a dictionary as we traverse the tree.
Algorithm
Start from the root node and traverse the tree. Until we find p and q both, keep storing the parent pointers in a dictionary. Once we have found both p and q, we get all the ancestors for p using the parent dictionary and add to a set called ancestors. Similarly, we traverse through ancestors for node q. If the ancestor is present in the ancestors set for p, this means this is the first ancestor common between p and q (while traversing upwards) and hence this is the LCA node.
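A sketch of that parent-pointer walk (my code; unordered_map/unordered_set stand in for the dictionary and set in the description, and the standard TreeNode is assumed):

#include <stack>
#include <unordered_map>
#include <unordered_set>
using namespace std;

TreeNode* lowestCommonAncestor(TreeNode* root, TreeNode* p, TreeNode* q) {
    unordered_map<TreeNode*, TreeNode*> parent;  // child -> parent
    parent[root] = nullptr;
    stack<TreeNode*> st;
    st.push(root);
    while (!parent.count(p) || !parent.count(q)) {  // traverse until both are found
        TreeNode* node = st.top(); st.pop();
        if (node->left)  { parent[node->left] = node;  st.push(node->left); }
        if (node->right) { parent[node->right] = node; st.push(node->right); }
    }
    unordered_set<TreeNode*> ancestors;             // all ancestors of p, including p itself
    for (TreeNode* n = p; n; n = parent[n]) ancestors.insert(n);
    TreeNode* n = q;                                // the first ancestor of q also in the set is the LCA
    while (!ancestors.count(n)) n = parent[n];
    return n;
}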
Complexity Analysis
Time Complexity : O(N), where N is the number of nodes in the binary tree. In the worst case we might be visiting all the nodes of the binary tree.
Space Complexity : O(N)O(N). In the worst case space utilized by the stack, the parent pointer dictionary and the ancestor set, would be NN each, since the height of a skewed binary tree could be NN.
Solution You need to understand preorder and postorder traversal first, and then go ahead.
Basic idea is:
preorder[0] is the root node of the tree preorder[x] is a root node of a sub tree In in-order traversal When inorder[index] is an item in the in-order traversal inorder[0]-inorder[index-1] are on the left branch inorder[index+1]-inorder[size()-1] are on the right branch if there is nothing on the left, that means the left child of the node is NULL if there is nothing on the right, that means the right child of the node is NULL Algorithm:
Start from rootIdx 0 Find preorder[rootIdx] from inorder, let's call the index pivot Create a new node with inorder[pivot] Create its left child recursively Create its right child recursively Return the created node. The implementation is self explanatory. Have a look :)
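Since the implementation is referenced but not shown here, this is a minimal C++ sketch of the algorithm above (the linear pivot search makes it O(n^2) in the worst case; a hash map over inorder would speed it up):

class Solution {
    int rootIdx = 0;
    TreeNode* build(vector<int>& preorder, vector<int>& inorder, int left, int right) {
        if (left > right) return nullptr;  // nothing on this branch
        int pivot = left;
        while (inorder[pivot] != preorder[rootIdx]) ++pivot;  // find root in inorder
        ++rootIdx;
        TreeNode* node = new TreeNode(inorder[pivot]);
        node->left = build(preorder, inorder, left, pivot - 1);
        node->right = build(preorder, inorder, pivot + 1, right);
        return node;
    }
public:
    TreeNode* buildTree(vector<int>& preorder, vector<int>& inorder) {
        return build(preorder, inorder, 0, inorder.size() - 1);
    }
};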
find midpoint by fast/slow method, use middle node as root. build left child by first half of the list build right child by second half of the list (head is midpoint->next)
DFS from the root down to it's descendants: We need to keep current path (which stores elements in the path) so far. We need to keep the remain targetSum so far (after minus value of elements in the path). If we already reach into leaf node Check if targetSum == 0 then we found a valid path from root to leaf node which sum equal to targetSum, so add current path to the answer. Else dfs on left node and on the right node.
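A C++ sketch of this DFS with backtracking (assuming the usual TreeNode; note the path copy at each valid leaf, which is exactly what drives the worst-case cost discussed below):

class Solution {
    vector<vector<int>> ans;
    vector<int> path;  // elements on the current root-to-node path
    void dfs(TreeNode* node, int remain) {
        if (!node) return;
        path.push_back(node->val);
        remain -= node->val;
        if (!node->left && !node->right && remain == 0)
            ans.push_back(path);  // copying the path costs O(N) here
        dfs(node->left, remain);
        dfs(node->right, remain);
        path.pop_back();  // backtrack
    }
public:
    vector<vector<int>> pathSum(TreeNode* root, int targetSum) {
        dfs(root, targetSum);
        return ans;
    }
};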
Complexity
Time: O(N^2), where N <= 5000 is the number of elements in the binary tree.
At first glance the time complexity looks like O(N), because we only visit each node once. But we must also count the cost of copying the current path whenever a valid path is found, which in the worst case brings the total to O(N^2); see the following example for a clearer picture.
Extra Space (without counting output as space): O(H), where H is height of the binary tree. This is the space for stack recursion or keeping path so far.
Approach 1: Recursive Solution The current solution is very simple. We make use of a function construct(nums, l, r), which returns the maximum binary tree consisting of numbers within the indices l and r in the given nums array (excluding the r-th element).
The algorithm consists of the following steps:
Start with the function call construct(nums, 0, n). Here, n refers to the number of elements in the given nums array.

Find the index max_i of the largest element in the current range of indices (l : r-1). Make this largest element, nums[max_i], the local root node.
Determine the left child using construct(nums, l, max_i). Doing this recursively finds the largest element in the subarray left to the current largest element.
Similarly, determine the right child using construct(nums, max_i + 1, r).
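Put together, the recursion looks roughly like this in C++ (a sketch of the steps above, with r exclusive as described):

TreeNode* construct(vector<int>& nums, int l, int r) {
    if (l == r) return nullptr;              // empty range
    int max_i = l;
    for (int i = l + 1; i < r; ++i)          // index of the largest element in [l, r)
        if (nums[i] > nums[max_i]) max_i = i;
    TreeNode* root = new TreeNode(nums[max_i]);
    root->left = construct(nums, l, max_i);
    root->right = construct(nums, max_i + 1, r);
    return root;
}

TreeNode* constructMaximumBinaryTree(vector<int>& nums) {
    return construct(nums, 0, nums.size());
}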
class Solution {
public:
    int dia = 0;  // longest path (in edges) seen so far

    int diameterOfBinaryTree(TreeNode* root) {
        // base case 1:
        if (!root) return 0;
        // recursively call the helper function with the root
        helper(root);
        // return the diameter value
        return dia;
    }

    int helper(TreeNode* node) {
        // base case 2
        if (!node) return 0;
        // depths of the left and right subtrees
        int leftNode = helper(node->left);
        int rightNode = helper(node->right);
        // the updated diameter is the max path between two nodes -> path between two leaves
        dia = max(dia, leftNode + rightNode);
        // for each node, extend the depth by one, as you pass an edge
        return 1 + max(leftNode, rightNode);
    }
};
What is a level in our binary tree? It is the set of nodes whose distance from the root is constant. And since we are talking about distances, it is a good idea to use bfs.
We put our root into the queue, so now we have level 0 in our queue. On each step, extract all nodes from the queue and put their children at the opposite end of the queue. This way we will have a full level at the end of each step, and the queue will be filled with nodes from the next level. In the end we just return the result. Complexity Time complexity is O(n): we perform one bfs on our tree. Space complexity is also O(n), because the answer has this size.
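A minimal C++ sketch of this one-level-per-step bfs (assuming the usual TreeNode definition):

vector<vector<int>> levelOrder(TreeNode* root) {
    vector<vector<int>> result;
    if (!root) return result;
    queue<TreeNode*> q;
    q.push(root);
    while (!q.empty()) {
        int size = q.size();          // all nodes currently in q form one full level
        vector<int> level;
        for (int i = 0; i < size; ++i) {
            TreeNode* node = q.front(); q.pop();
            level.push_back(node->val);
            if (node->left) q.push(node->left);
            if (node->right) q.push(node->right);
        }
        result.push_back(level);
    }
    return result;
}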
Assuming after traversing the 1st level, nodes in queue are {9, 20, 8}, And we are going to traverse 2nd level, which is even line and should print value from right to left [8, 20, 9].
We know there are 3 nodes in current queue, so the vector for this level in final result should be of size 3. Then, queue [i] -> goes to -> vector[queue.size() - 1 - i] i.e. the ith node in current queue should be placed in (queue.size() - 1 - i) position in vector for that line.
For example, for node(9), it's index in queue is 0, so its index in vector should be (3-1-0) = 2.
Iterative solution using stack --- O(n) time and O(n) space; Recursive solution --- O(n) time and O(n) space (function call stack); Morris traversal --- O(n) time and O(1) space.
The basic idea is to maintain a hash table for each element num in nums, using num as key and its index (0-based) as value. For each num, search for target - num in the hash table. If it is found and is not the same element as num, then we are done.
The code is as follows. Note that each time before we add num to mp, we search for target - num first and so we will not hit the same element.
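Since the code itself didn't survive here, this is a minimal C++ sketch of that search-before-insert pass (the map name mp follows the text):

vector<int> twoSum(vector<int>& nums, int target) {
    unordered_map<int, int> mp;  // value -> index
    for (int i = 0; i < (int)nums.size(); ++i) {
        auto it = mp.find(target - nums[i]);  // search before inserting nums[i]
        if (it != mp.end()) return {it->second, i};
        mp[nums[i]] = i;
    }
    return {};
}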
The two pointers pattern requires the array to be sorted, so we do that first. Also, it's easier to deal with duplicates if the array is sorted: repeated values are next to each other and easy to skip.
For 3Sum, we enumerate each value in a single loop, and use the two pointers pattern for the rest of the array. For kSum, we will have k - 2 nested loops to enumerate all combinations of k - 2 values.
Given a string s containing just the characters '(', ')', '{', '}', '[' and ']', determine whether the input string is valid.

A valid string must satisfy:

Open brackets must be closed by the same type of brackets. Open brackets must be closed in the correct order.

Example 1:

Input: s = "()" Output: true Example 2:

Input: s = "()[]{}" Output: true Example 3:

Input: s = "(]" Output: false Example 4:

Input: s = "([)]" Output: false Example 5:

Input: s = "{[]}" Output: true
Solution1 Stack
class Solution {
public:
    bool isValid(string s) {
        stack<char> st;  // stack to keep track of the order of the brackets
        for (auto i : s)  // iterate over every character
        {
            if (i == '(' or i == '{' or i == '[')
                st.push(i);  // if the current character is an opening bracket, simply push it onto the stack
            else  // otherwise the current character is a closing bracket, so check two things:
                  // the stack must not be empty, and the top of the stack must match it
            {
                if (st.empty() or (st.top() == '(' and i != ')') or (st.top() == '{' and i != '}') or (st.top() == '[' and i != ']'))
                    return false;
                st.pop();  // we found a matching pair of brackets, so pop it
            }
        }
        return st.empty();  // anything left on the stack is unpaired, so return whether the stack is empty
    }
};
m == matrix.length n == matrix[i].length 1 <= m, n <= 10 -100 <= matrix[i][j] <= 100
Algorithm:
First we iterate over the first row from left to right and push back all its elements into a vector. After iterating, we move the top boundary to the second row (top++). Then we iterate from the new top to bottom and push back only the rightmost element of each row. After iterating, we move the right boundary to the second-to-last column (right--). Then we iterate over the bottom row from right to left and push back all the elements from the new right to left. After iterating, we move the bottom boundary to the second-to-last row (bottom--). Then we iterate from the new bottom to the new top and push back only the leftmost element of each row. After iterating, we move the left boundary to the second column (left++). Repeat all these steps until the boundaries cross (left > right or top > bottom). A sketch of this follows.
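A C++ sketch of those four boundary walks (the two guards handle a single remaining row or column):

vector<int> spiralOrder(vector<vector<int>>& matrix) {
    vector<int> ans;
    int top = 0, bottom = matrix.size() - 1;
    int left = 0, right = matrix[0].size() - 1;
    while (top <= bottom && left <= right) {
        for (int j = left; j <= right; ++j) ans.push_back(matrix[top][j]);    // top row
        ++top;
        for (int i = top; i <= bottom; ++i) ans.push_back(matrix[i][right]);  // rightmost column
        --right;
        if (top <= bottom) {  // guard against re-reading a single remaining row
            for (int j = right; j >= left; --j) ans.push_back(matrix[bottom][j]);
            --bottom;
        }
        if (left <= right) {  // guard against re-reading a single remaining column
            for (int i = bottom; i >= top; --i) ans.push_back(matrix[i][left]);
            ++left;
        }
    }
    return ans;
}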
Initially we sort the array and then push the first interval into the ans array as the current candidate. There are two cases: if the end of the last interval in the ans array is greater than or equal to the start of the current interval, the two overlap and we merge them. Otherwise, we push the current interval into the ans array, and it becomes the new candidate. interval: [[1,3],[2,6],[8,10],[15,18]] i We initially push the 1st element into the ans array: ans=[[1,3]] j j points to the latest pushed element Then i is incremented. [[1,3],[2,6],[8,10],[15,18]] i Now ans[j][1] >= interval[i][0], which means there is a possibility of merging, so we merge them. Remember, the way we merge is to take the second element as max(ans[j][1], interval[i][1]), because imagine we had [1,7],[2,4] ---> the merge should be ----> [1,7]

ans=[[1,6]]

then we move i forward

[[1,3],[2,6],[8,10],[15,18]] i Since ans[j][1] < interval[i][0], this interval does not contribute to a merge, so we push it into the ans array as the new candidate.

ans=[[1,6][8,10]] j <---- j is moved forward i is moved forward [[1,3],[2,6],[8,10],[15,18]] i Since ans[j][1] < interval[i][0], this one does not merge either. ans=[[1,6][8,10][15,18]] j
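Condensed into code, the walkthrough above looks roughly like this C++ sketch (using >= so that intervals like [1,3] and [2,6] merge):

vector<vector<int>> merge(vector<vector<int>>& intervals) {
    sort(intervals.begin(), intervals.end());  // sort by start point
    vector<vector<int>> ans;
    for (auto& interval : intervals) {
        if (ans.empty() || ans.back()[1] < interval[0])
            ans.push_back(interval);                         // no overlap: new candidate
        else
            ans.back()[1] = max(ans.back()[1], interval[1]); // overlap: extend the last one
    }
    return ans;
}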
There is no trick for this problem. Some people used slow/fast pointers to find the tail node but I don't see the benefit (in the sense that it doesn't reduce the pointer move op) to do so. So I just used one loop to find the length first.
class Solution { public: ListNode* rotateRight(ListNode* head, int k) { if(!head) return head; int len=1; // number of nodes ListNode *newH, *tail; newH=tail=head; while(tail->next) // get the number of nodes in the list { tail = tail->next; len++; } tail->next = head; // circle the link
if(k %= len) { for(auto i=0; i<len-k; i++) tail = tail->next; // the tail node is the (len-k)-th node (1st node is head) } newH = tail->next; tail->next = NULL; return newH; } };
My solution is nothing special and isn't clever at all. I decided to post it since I thought the "official" solution article from leetcode was very poorly written and confused me more, even after I solved it on my own.
So, I believe my comments below should explain the idea, but I want to add that it helps to test the more obscure test cases for this problem to understand the algorithm. For example:
[9] [9090]

class Solution {
public:
    vector<int> plusOne(vector<int>& digits) {
        int n = digits.size() - 1;
        for (int i = n; i >= 0; --i) {  // traverse digits from the last element (least significant)
            // since we begin with the last digit, increasing a 9 by one
            // produces a carry. Therefore, all elements PRIOR to digits[0]
            // need to be considered, since there may be additional nines between
            // digits[0], ... , digits[n].
            if (digits[i] == 9) {
                digits[i] = 0;
            } else {
                // current digit is not 9 so we can safely increment by one
                digits[i] += 1;
                return digits;
            }
        }
        // if the program runs to this point, every 9 has become a 0.
        // to get a correct result, we need to append one more element with
        // a value of zero AND set digits[0] to 1 (in the most significant position)
        // to account for the carry digit.
        digits.push_back(0);
        digits[0] = 1;
        return digits;
    }
};
class Solution { public: void sortColors(vector<int>& nums) { int n = nums.size(); int ptr = 0; for (int i = 0; i < n; ++i) { if (nums[i] == 0) { swap(nums[i], nums[ptr]); ++ptr; } } for (int i = ptr; i < n; ++i) { if (nums[i] == 1) { swap(nums[i], nums[ptr]); ++ptr; } } } };
Solution3 双指针
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
class Solution { public: void sortColors(vector<int>& nums) { int n = nums.size(); int p0 = 0, p2 = n - 1; for (int i = 0; i <= p2; ++i) { while (i <= p2 && nums[i] == 2) { swap(nums[i], nums[p2]); --p2; } if (nums[i] == 0) { swap(nums[i], nums[p0]); ++p0; } } } };
I noticed that the solutions posted here are too long and complicated. They use unnecessary variables and/or checks, etc. The solution can be much more concise. Here is my solution:
class Solution {
public:
    ListNode *deleteDuplicates(ListNode *head) {
        ListNode* cur = head;
        while (cur) {
            while (cur->next && cur->val == cur->next->val)
                cur->next = cur->next->next;
            cur = cur->next;
        }
        return head;
    }
};

Note about freeing memory. We need to free memory when we delete a node. But don't use the delete node; construct in an interview without discussing it with the interviewer. A list node can be allocated in many different ways, and we can use delete node; only if we are sure that the nodes were allocated with new ListNode(...);.
We can take two pointers before and after to keep track of the two linked lists as described above. These two pointers can be used to create two separate lists, which are then combined to form the desired reformed list.
Algorithm
Initialize two pointers before and after. In the implementation we have initialized these two with a dummy ListNode. This helps to reduce the number of conditional checks we would need otherwise. You can try an implementation where you don't initialize with a dummy node and see it yourself!
Dummy Node Initialization
Iterate the original linked list, using the head pointer.
If the node's value pointed to by head is less than x, the node should be part of the before list. So we move it to the before list.
Else, the node should be part of after list. So we move it to after list.
Once we are done with all the nodes in the original linked list, we have two lists: before and after. Each original node is part of either the before list or the after list, depending on its value.
Note: Since we traverse the original linked list from left to right, at no point would the order of nodes change relatively in the two lists. Another important thing to note here is that we show the original linked list intact in the above diagrams. However, in the implementation, we remove the nodes from the original linked list and attach them in the before or after list. We don't utilize any additional space. We simply move the nodes from the original list around.
Now, these two lists before and after can be combined to form the reformed list.
We did a dummy-node initialization at the start to make the implementation easier. You don't want the dummy to be part of the returned list; hence, while combining the two lists, just move ahead one node in both of them, since both before and after have an extra node at the front.
nums1.length == m + n nums2.length == n 0 <= m, n <= 200 1 <= m + n <= 200 -10^9 <= nums1[i], nums2[j] <= 10^9
Follow up: Can you come up with an algorithm that solves this problem in O(m + n) time?
This code relies on the simple observation that once all of the numbers from nums2 have been merged into nums1, the rest of the numbers in nums1 that were not moved are already in the correct place.
The way to think about the solution is that we will have to do a reverse sorting. We initialize k=m+n-1 as that will be the last location of nums1. We will keep checking for the greater element of the two arrays(i=m-1,j=n-1) and insert the values. nums1 = [1,2,3,0,0,0], m = 3 nums2 = [2,5,6], n = 3
nums1 = [1,2,3,0,0,0] | | i k
nums2 = [2,5,6] | j nums2[j]>nums1[i] thus nums1[k]=6 k and j are decremented.
nums1 = [1,2,3,0,0,6] | | i k
nums2 = [2,5,6] | j nums2[j]>nums1[i] thus nums1[k]=5 k and j are decremented.
nums1 = [1,2,3,0,5,6] | | i k
nums2 = [2,5,6] | j We keep following this procedure and we get the desired result.
void merge(vector<int>& nums1, int m, vector<int>& nums2, int n) { int i=m-1,j=n-1,k=m+n-1; while(i>=0&&j>=0) { if(nums1[i]>nums2[j]) { nums1[k]=nums1[i]; i--; k--; } else { nums1[k]=nums2[j]; j--; k--; } } while(i>=0) nums1[k--]=nums1[i--]; while(j>=0) nums1[k--]=nums2[j--]; }
class Solution { public: void merge(vector<int>& nums1, int m, vector<int>& nums2, int n) { int p1 = 0, p2 = 0; int sorted[m + n]; int cur; while (p1 < m || p2 < n) { if (p1 == m) { cur = nums2[p2++]; } else if (p2 == n) { cur = nums1[p1++]; } else if (nums1[p1] < nums2[p2]) { cur = nums1[p1++]; } else { cur = nums2[p2++]; } sorted[p1 + p2 - 1] = cur; } for (int i = 0; i != m + n; ++i) { nums1[i] = sorted[i]; } } };
class Solution { public: void merge(vector<int>& nums1, int m, vector<int>& nums2, int n) { int p1 = m - 1, p2 = n - 1; int tail = m + n - 1; int cur; while (p1 >= 0 || p2 >= 0) { if (p1 == -1) { cur = nums2[p2--]; } else if (p2 == -1) { cur = nums1[p1--]; } else if (nums1[p1] > nums2[p2]) { cur = nums1[p1--]; } else { cur = nums2[p2--]; } nums1[tail--] = cur; } } };
We define a recursion function that will do the job of reversing a portion of the linked list. Let's call this function recurse. The function takes in 3 parameters: m being the starting point of the reversal, n being the ending point for the reversal, and a pointer right which will start at the n-th node in the linked list and move backwards with the backtracking of the recursion. If this is not clear at the moment, the diagrams that follow will help. Additionally, we have a pointer called left which starts from the m-th node in the linked list and moves forward. In Python, we have to take a global variable for this which gets changed with recursion. In other languages, where changes made in function calls persist, we can consider this pointer as an additional variable for the function recurse.

In a recursion call, given m, n, and right, we check if n == 1. If this is the case, we don't need to go any further. Until we reach n = 1, we keep moving the right pointer one step forward, and after doing that, we make a recursive call with the value of n decreased by 1. At the same time, we keep on moving the left pointer forward until m == 1. When we refer to a pointer being moved forward, it essentially means pointer.next.

So we backtrack as soon as n reaches 1. At that point, the right pointer is at the last node of the sublist we want to reverse, and the left pointer has already reached the first node of this sublist. So, we swap out the data and move the left pointer one step forward using left = left.next. We need this change to persist across the backtracking process. From there on, every time we backtrack, the right pointer moves one step backwards. This is the simulation we've been mentioning all along: the backward movement is simulated by backtracking.

We stop the swaps when either right == left, which happens if the sublist size is odd, or right.next == left, which happens when, during the backtracking process for an even-sized sublist, the right pointer crosses left. We use a global boolean flag to stop the swaps once these conditions are met. Let's look at a series of diagrams explaining the process on a sample linked list. Hopefully, things will be clearer after this.
This is the first step in the recursion process. We have a list given to us and the left and the right pointers start off from the head of the linked list. The first step makes a recursive call with updated values of m and n i.e. their values each reduced by 1. Also, the left and the right pointers move one step forward in the linked list.
The next two steps show the movement of the left and the right pointers in the list. Notice that after the second step, the left pointer reaches its designated spot. So, we don't move it any further. Only the right pointer progresses from here on out until it reaches node 6.

As we can see, after step 5, both pointers are in their designated spots in the list and we can start the backtracking process. We don't recurse further. The operation performed during the backtracking is the swapping of data between the left and right nodes.

The right pointer crosses the left pointer after step 3 (backtracking), as can be seen above, and by that point, we have already reversed the required portion of the linked list. We needed the output list to be [7 → 9 → 8 → 1 → 10 → 2 → 6] and that's what we have. So, we don't perform any more swaps, and in the code, we can use a global boolean flag to stop the swapping after a point. We can't really break out of recursion, per se.
The Pascal triangle problem has a very simple and intuitive dynamic programming approach. As the definition states, every element of a row is formed by the sum of the two numbers directly above it. So, we can just apply DP and use the previously stored rows of the triangle to calculate the new rows.
We can just initialize the first and last elements of each row as 1 and update only the elements between them. This makes the code simpler and avoids extra checks for the edge elements of each row.
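A minimal C++ sketch of that row-by-row DP (each row starts as all 1s, so the edges need no special handling):

vector<vector<int>> generate(int numRows) {
    vector<vector<int>> triangle;
    for (int i = 0; i < numRows; ++i) {
        vector<int> row(i + 1, 1);            // first and last element are already 1
        for (int j = 1; j < i; ++j)
            row[j] = triangle[i - 1][j - 1] + triangle[i - 1][j];  // sum of the two above
        triangle.push_back(row);
    }
    return triangle;
}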
bool isPalindrome(string s) { for (int i = 0, j = s.size() - 1; i < j; i++, j--) { // Move 2 pointers from each end until they collide while (isalnum(s[i]) == false && i < j) i++; // Increment left pointer if not alphanumeric while (isalnum(s[j]) == false && i < j) j--; // Decrement right pointer if no alphanumeric if (toupper(s[i]) != toupper(s[j])) return false; // Exit and return error if not match } return true; }
Algorithm Description: Step 1: Determine whether there is a cycle

1.1) Use a slow pointer that moves forward 1 step each time

1.2) Use a fast pointer that moves forward 2 steps each time

1.3) If the slow pointer and the fast pointer both point to the same location after several steps, there is a cycle;

1.4) Otherwise, if (fast->next == NULL || fast->next->next == NULL), there is no cycle.
Step 2: If there is a cycle, return the entry location of the cycle
2.1) L1 is defined as the distance between the head point and entry point
2.2) L2 is defined as the distance between the entry point and the meeting point
2.3) C is defined as the length of the cycle
2.4) n is defined as the number of times the fast pointer has traveled around the cycle at the first encounter of the slow pointer and the fast pointer
According to the definition of L1, L2 and C, we can obtain:
the total distance of the slow pointer traveled when encounter is L1 + L2
the total distance of the fast pointer traveled when encounter is L1 + L2 + n * C
Because the total distance the fast pointer traveled is twice as the slow pointer, Thus:
2 * (L1 + L2) = L1 + L2 + n * C => L1 + L2 = n * C => L1 = (n - 1) * C + (C - L2)
It can be concluded that the distance between the head location and entry location is equal to the distance between the meeting location and the entry location along the direction of forward movement.
So, when the slow pointer and the fast pointer encounter in the cycle, we can define a pointer "entry" that point to the head, this "entry" pointer moves one step each time so as the slow pointer. When this "entry" pointer and the slow pointer both point to the same location, this location is the node where the cycle begins.
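A C++ sketch of the whole two-phase procedure (find the meeting point, then walk the "entry" pointer from head):

ListNode* detectCycle(ListNode* head) {
    ListNode *slow = head, *fast = head;
    while (fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast) {                 // cycle found; now locate the entry
            ListNode* entry = head;
            while (entry != slow) {         // L1 == (n-1)*C + (C - L2), as derived above
                entry = entry->next;
                slow = slow->next;
            }
            return entry;
        }
    }
    return nullptr;                         // no cycle
}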
If you have never solved singly linked list problems before, or you do not have a lot of experience, this problem can be quite difficult. However, if you already know all the tricks, it is not difficult at all. Let us first try to understand what we need to do. For list [1,2,3,4,5,6,7] we need to return [1,7,2,6,3,5,4]. We can note that it is actually two lists, [1,2,3,4] and [7,6,5], whose elements are interleaved. So, to succeed we need to do the following steps:

Find the middle of our list - be careful, it needs to work properly both for an even and for an odd number of nodes. For this we can either just count the number of elements and then divide it by two, doing two traversals of the list, or we can use the slow/fast iterators trick, where slow moves with speed 1 and fast moves with speed 2. Then, when fast reaches the end, slow will be in the middle, as we need.

Reverse the second part of the linked list. Again, if you have never done it before, it can be quite painful; please read the official solution to problem 206. Reverse Linked List. The idea is to keep three pointers: prev, curr, nextt, standing for previous, current and next, and change connections in place. Do not forget to use slow.next = None; otherwise you will end up with a list containing a loop.

Finally, we need to merge two lists, given their heads. These heads are denoted by head and prev, so for simplicity I created head1 and head2 variables. What we need to do now is to interleave nodes: we put head2 as the next element of head1 and then say that head1 is now head2 and head2 is the previous head1.next. In this way we do one step for one of the lists and rename the lists, so next time we will take an element from head2, then rename again and so on.

Complexity: Time complexity is O(n), because we first do O(n) iterations to find the middle, then we do O(n) iterations to reverse the second half, and finally we do O(n) iterations to merge the lists. Space complexity is O(1).
1 <= capacity <= 3000 0 <= key <= 10000 0 <= value <= 10^5 At most 2 * 10^5 calls will be made to get and put
In this question we have to keep track of the least recently used item in the cache. I have designed the cache using a list and a map in C++. We do it by following the steps below:

When we access an item in the cache, it moves to the front of the list, as it is now the most recently used item. When we want to remove an item, we remove it from the back of the list, as it is the least recently used item in the cache. When we insert an item, we insert it at the front of the list, as it is the most recently used item. The idea is to store the keys in the map and their corresponding values in the list... Note: the splice() function here takes the element at m[key] and places it at the beginning of the list... A sketch of this design follows.
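A compact C++ sketch of that list + map design (an illustration of the steps above; the member names items and m are assumptions):

class LRUCache {
    int cap;
    list<pair<int, int>> items;                           // front = most recently used
    unordered_map<int, list<pair<int, int>>::iterator> m; // key -> node in the list
public:
    LRUCache(int capacity) : cap(capacity) {}

    int get(int key) {
        auto it = m.find(key);
        if (it == m.end()) return -1;
        items.splice(items.begin(), items, it->second);   // move accessed node to the front
        return it->second->second;
    }

    void put(int key, int value) {
        auto it = m.find(key);
        if (it != m.end()) {
            it->second->second = value;
            items.splice(items.begin(), items, it->second);
            return;
        }
        if ((int)items.size() == cap) {                   // evict the least recently used
            m.erase(items.back().first);
            items.pop_back();
        }
        items.emplace_front(key, value);
        m[key] = items.begin();
    }
};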
We have to return the linked list after sorting it in ascending order. Let's take an example not given in the question - suppose the head of the linked list given to us is head: [3,-9,8,67,9];

then the answer should look like [-9,3,8,9,67] after sorting it in ascending order. Solution - I (Using Merge sort, Accepted)-

Since we want to sort a linked list, we may be able to take any sorting algorithm and apply it here. We will use merge sort, because I find it easy to implement on a linked list. The whole implementation is totally based on merge sort, so I strongly suggest you read an article on merge sort first. The basic things we do when applying merge sort to our linked list are: we divide our linked list into two equal parts until only one element is left, and after that we start merging them based on value. Now, if we divide the list into two equal parts, how do we find the middle of a linked list? We find the middle using the tortoise and hare method, i.e. fast and slow pointers. See the commented sketch below for more explanation.
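A C++ sketch of merge sort on a linked list, along the lines described above (split at the middle with slow/fast pointers, sort each half recursively, then merge):

class Solution {
    ListNode* mergeTwo(ListNode* a, ListNode* b) {       // merge two sorted lists
        ListNode dummy(0), *tail = &dummy;
        while (a && b) {
            if (a->val <= b->val) { tail->next = a; a = a->next; }
            else                  { tail->next = b; b = b->next; }
            tail = tail->next;
        }
        tail->next = a ? a : b;
        return dummy.next;
    }
public:
    ListNode* sortList(ListNode* head) {
        if (!head || !head->next) return head;
        ListNode *slow = head, *fast = head->next;       // tortoise and hare to find the middle
        while (fast && fast->next) { slow = slow->next; fast = fast->next->next; }
        ListNode* second = slow->next;
        slow->next = nullptr;                            // split into two halves
        return mergeTwo(sortList(head), sortList(second));
    }
};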
Idea: Reverse Polish Notation was designed specifically to make computing easier with the more efficient use of a stack. So we can use a stack here to store numbers until they're used, and then each operator will consume the top two values of the stack.
Since the order of the numbers is still important for subtraction and division, we'll have to make sure that the two numbers are processed in their original order, which is the opposite order of the stack.
After each successful operation, the result should be pushed back onto the stack until it's used. After iteration is complete, the remaining value in the stack will be our answer, so we should return stack[0].
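A minimal C++ sketch of this stack loop (note b is popped first, since it was pushed last, preserving the original operand order for - and /):

int evalRPN(vector<string>& tokens) {
    stack<long> st;
    for (string& t : tokens) {
        if (t == "+" || t == "-" || t == "*" || t == "/") {
            long b = st.top(); st.pop();   // second operand was pushed last
            long a = st.top(); st.pop();
            if (t == "+") st.push(a + b);
            else if (t == "-") st.push(a - b);
            else if (t == "*") st.push(a * b);
            else st.push(a / b);
        } else {
            st.push(stol(t));              // plain number
        }
    }
    return st.top();                       // the single remaining value
}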
Time Complexity: O(N) where N is the length of tokens Space Complexity: O(N) for the length of the stack, up to N / 2 + 1 values
1. There are 26 letters in our alphabet and we start counting from 1, not zero. So 'Z' is 26. 2. The rest of the combinations start from a base 26
AA --> 26*1+ 1 = 27 (A == 1) AB --> 26*1+ 2 = 28 (B == 2) AC -->26*1 + 3 = 29 (C == 3) .....
So we can write like this:
result = 0 d = s[i](char) - 'A' + 1 (we used s[i] - 'A' to convert the letter to a number like it's going to be C) result = result* 26 + d
If the given input is only one letter, it will automatically take the value s[i] - 'A' + 1 as the first result is 0. Some More Explanation 1. For every additional digit of the string, we multiply the value of the digit by 26^n 2. here n is the number of digits it is away from the one's place. 3. This is similar to how the number 254 could be broken down as this: (2 x 10 x 10) + (5 x 10) + (4). 4. The reason we use 26 instead of 10 is because 26 is our base.
For s = "BCM" the final solution would be (2 x 26 x 26) + (3 x 26) + (13)
We could do this process iteratively. Start at looking at the first digit "B". Add the int equivalent of "B" to the running sum and continue. Every time we look at the following digit multiply our running sum by 26 before adding the next digit to signify we are changing places. Example below:
"B" = 2 "BC" = (2)26 + 3 "BCM" = (2(26) + 3)26 + 13 Time Complexity : O(n) one scan of string , n is number of characters in the string
I came up with this simple solution using just a single stack. Here I am using Stack of Pair of Int. The first value of the pair would store the element of the normal stack and the second value would store the minimum up to that point in the stack. So even if the minimum element of the stack is removed from the top, we still have a backup of the next minimum element in the pair. So for every element pushed in the stack, it stores its corresponding minimum value.
For example, let's do a Dry Run of an example.
["MinStack","push","push","push","push","push","getMin","pop","pop","top","push","getMin"] [[],[5],[-2],[3],[-10],[20],[],[],[],[],[30],[]] We push 5,-2,3,-10,20 in the stack. If the stack is empty we push {val,val} in the stack else we push {val,min(s.top().second,val)} which is basically minimum upto that point. Hence {5,5},{-2,-2},{3,-2},{-10,-10},{20,-10} are pushed in the stack. To pop simply do stack.pop() To get the top return stack.top().first; Now we pop 20 and -10 from the stack The elements in the stack would be {5,5},{-2,-2},{3,-2} On pushing 30 to the stack The elements in the stack would be {5,5},{-2,-2},{3,-2},{30,-2}. The Output of the code would be:
[null,null,null,null,null,null,-10,null,null,3,null,-2] All the operations are one-liners except the Push operation, which is a two-liner.
class MinStack { public: vector< pair<int,int> > s; MinStack() { } void push(int val) { if(s.empty()) s.push_back({val,val}); else s.push_back({val,min(s.back().second,val)}); } void pop() { s.pop_back(); } int top() { return s.back().first; } int getMin() { return s.back().second; } }; The Time complexity of each operation is O(1) The Space complexity is O(N)
The number of nodes in listA is m The number of nodes in listB is n 1 <= m, n <= 3 * 10^4 1 <= Node.val <= 10^5 0 <= skipA <= m 0 <= skipB <= n intersectVal is 0 if listA and listB do not intersect intersectVal == listA[skipA] == listB[skipB] if listA and listB do intersect
Follow up: Could you design a solution with O(m + n) time complexity that uses only O(1) memory?
O(1) SPACE SOLUTION
First, using constant space, check the last element of both lists. If the tails of the two lists are different, then return NULL.

Now we know that the intersection length is the same for both lists, so we want to make the lengths prior to the intersection equal as well. The head pointer of the longer list is moved forward until the lengths of both lists become equal.

Now the intersection point is at the same distance from the head for both lists. A sketch follows.
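A C++ sketch of the length-equalization idea (here the tail check is implicit: if the lists never intersect, both pointers reach NULL together and NULL is returned):

ListNode* getIntersectionNode(ListNode* headA, ListNode* headB) {
    int lenA = 0, lenB = 0;
    for (ListNode* p = headA; p; p = p->next) ++lenA;
    for (ListNode* p = headB; p; p = p->next) ++lenB;
    while (lenA > lenB) { headA = headA->next; --lenA; }  // skip extra nodes of the longer list
    while (lenB > lenA) { headB = headB->next; --lenB; }
    while (headA != headB) {                              // advance together until they meet
        headA = headA->next;
        headB = headB->next;
    }
    return headA;  // the intersection node, or NULL if the lists never meet
}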
Approach 1: Brute Force Intuition We can exhaust the search space in quadratic time by checking whether each element is the majority element.
Algorithm The brute force algorithm iterates over the array, and then iterates again for each number to count its occurrences. As soon as a number is found to have appeared more than any other can possibly have appeared, return it.
Complexity Analysis Time complexity : O(n^2)
The brute force algorithm contains two nested for loops that each run for n iterations, adding up to quadratic time complexity.
Space complexity : O(1)
The brute force solution does not allocate additional space proportional to the input size.
To construct the largest number, we want to ensure that the most significant digits are occupied by the largest digits.
Algorithm
First, we convert each integer to a string. Then, we sort the array of strings.
While it might be tempting to simply sort the numbers in descending order, this causes problems for sets of numbers with the same leading digit. For example, sorting the problem example in descending order would produce the number 9534303, while the correct answer can be achieved by transposing the 3 and the 30. Therefore, for each pairwise comparison during the sort, we compare the numbers achieved by concatenating the pair in both orders. We can prove that this sorts into the proper order as follows:
Assume that (without loss of generality), for some pair of integers a and b, our comparator dictates that a should precede b in sorted order. This means that a⌢b > b⌢a (where ⌢ represents concatenation). For the sort to produce an incorrect ordering, there must be some c for which b precedes c and c precedes a. This is a contradiction because a⌢b > b⌢a and b⌢c > c⌢b implies a⌢c > c⌢a. In other words, our custom comparator preserves transitivity, so the sort is correct.
Once the array is sorted, the most "significant" number will be at the front. There is a minor edge case that comes up when the array consists of only zeroes, so if the most significant number is 0, we can simply return 0. Otherwise, we build a string out of the sorted array and return it.
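A hedged C++ sketch of that comparator-based sort and the all-zeroes edge case:

string largestNumber(vector<int>& nums) {
    vector<string> strs;
    for (int x : nums) strs.push_back(to_string(x));
    sort(strs.begin(), strs.end(),
         [](const string& a, const string& b) { return a + b > b + a; });  // compare both concatenations
    if (strs[0] == "0") return "0";  // array of all zeroes
    string ans;
    for (string& s : strs) ans += s;
    return ans;
}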
Simple sliding window solution; comments added for better explanation.
class Solution {
public:
    vector<string> findRepeatedDnaSequences(string s) {
        vector<string> ans;
        map<string, int> mmap;
        // store the first 10-character substring (dna sequence) in temp
        string temp = s.substr(0, 10);
        // add the first dna sequence to the map
        mmap[temp]++;
        // now the sliding window
        for (int i = 10; i < s.length(); i++) {
            // remove the first character from the existing substring
            temp = temp.substr(1);
            // add the next character to the substring
            temp = temp + s[i];
            // add the new dna sequence to the map
            mmap[temp]++;
            // if the count of a given sequence is greater than 1 and it is not
            // already present in our ans vector, push it in.
            // the find operation keeps the elements in the answer vector unique:
            // for example, if the sequence aa...a is present 4 times, our sliding
            // window logic would add it 4 times, but we want it only once,
            // therefore we check whether the sequence is already in the vector.
            if (mmap[temp] > 1 and find(ans.begin(), ans.end(), temp) == ans.end()) {
                ans.push_back(temp);
            }
        }
        return ans;
    }
};
Solution1 Hash Table
We can use a hash table to count the occurrences of every substring of s of length 10, and return every substring that appears more than once.
class Solution { const int L = 10; public: vector<string> findRepeatedDnaSequences(string s) { vector<string> ans; unordered_map<string, int> cnt; int n = s.length(); for (int i = 0; i <= n - L; ++i) { string sub = s.substr(i, L); if (++cnt[sub] == 2) { ans.push_back(sub); } } return ans; } };
// O(n): This solution simply reverses the whole array, then reverses the subarray from 0 to k-1 and then from k to n-1. Do a dry run with an example on paper and you will understand it.

// O(n) with extra space: we simply store the last k elements in the same order, from index n-k to n-1, in a temp vector, and then push back the remaining elements, from index 0 to n-k-1.

// O(n*k): we rotate the elements of the vector one by one, k times, to achieve k rotations.
Solution1 Extra Array

Solution2 Array Reversal
class Solution { public: void reverse(vector<int>& nums, int start, int end) { while (start < end) { swap(nums[start], nums[end]); start += 1; end -= 1; } }
void rotate(vector<int>& nums, int k) { k %= nums.size(); reverse(nums, 0, nums.size() - 1); reverse(nums, 0, k - 1); reverse(nums, k, nums.size() - 1); } };
The hash-set solution is very straightforward. For every new value, we check whether it is already in the set. If not, we insert it into the set. If yes, we have detected the loop. Only when the value in the loop is 1 is the number a happy number.
2.hash-map
The idea is similar to the hash-set. We check the node value to determine whether it is in the loop.

The code is as follows. The time complexity is usually O(1) (the worst case may be O(n) due to collisions).
3.Floyd's Cycle detection algorithm
Floyd's cycle detection algorithm is a pointer algorithm that uses only two pointers, which move through the sequence at different speeds. Obviously, if there is a loop, they will meet in the loop. It is also called the "tortoise and the hare algorithm"
4.Brent's Cycle detection algorithm
Brent's algorithm features a moving rabbit and a stationary, then teleporting, turtle. Both turtle and rabbit start at the top of the list. The rabbit takes one step per iteration. Every once in a while, we teleport the turtle to the rabbit's position, and let the rabbit continue moving. We start out waiting just 2 steps before teleportation, and we double that each time we move the turtle. If there is a loop, they will meet in the loop.
The code is as follows. The time complexity is O(λ + μ). However, you're doing less stepping than with Floyd's (in fact, the upper bound for steps is the number you would do with Floyd's algorithm). According to Brent's research, his algorithm is 24-36% faster on average for implicit linked list algorithms. (However, it costs the same time as Floyd's in the OJ ;) )
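Here is a minimal C++ sketch of Brent's scheme applied to this problem (an illustration; the digit-square step is the usual "next" function, and the 2-step initial window matches the description above):

bool isHappy(int n) {
    auto next = [](int x) {           // sum of the squares of the digits
        int s = 0;
        while (x) { s += (x % 10) * (x % 10); x /= 10; }
        return s;
    };
    int turtle = n, rabbit = next(n);
    int steps = 0, limit = 2;
    while (rabbit != 1 && turtle != rabbit) {
        rabbit = next(rabbit);        // the rabbit takes one step per iteration
        if (++steps == limit) {
            turtle = rabbit;          // teleport the turtle to the rabbit
            steps = 0;
            limit *= 2;               // double the waiting window each teleport
        }
    }
    return rabbit == 1;               // meeting on a value != 1 means an unhappy loop
}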
A simple solution to delete the nodes having value T is to traverse over the linked list and just remove the next pointers to the node having value as T. Now, usually in deletion problem of linked list, there can be multiple cases where node to be deleted is either a head node or other node in rest of list. We usually make use of a dummy node at the start or sentinel node to avoid handling multiple edge cases and write a clean uniform solution.
So, the algorithm we are using can be summarised as -
Initialize a dummy/sentinel node having its next pointer pointing to the head of linked list and another node pointer prev pointing to this dummy node. Start iterating over head of linked list If current node's value is not equal to T, we can just move to next node without deleting current node. In this case, We first update prev pointer and point it to current head Then move head to next node. Otherwise, if head -> val == T, we know that this node needs to be deleted. In this case, We can just update the next pointer of previous node to the next pointer of current node. This will basically remove the current node from list. Then, we update head to its next node just as in previous case. Finally, ignore the dummy node created at start and return its next node.
Solution1 Iteration
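A minimal C++ sketch of the iterative, sentinel-node version summarised above:

ListNode* removeElements(ListNode* head, int val) {
    ListNode dummy(0);          // sentinel node placed before head
    dummy.next = head;
    ListNode* prev = &dummy;
    while (prev->next) {
        if (prev->next->val == val)
            prev->next = prev->next->next;  // unlink the matching node
        else
            prev = prev->next;
    }
    return dummy.next;
}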
Solution2 Recursion
ListNode *removeElements(ListNode *head, int val) { if (!head) return head; head->next = removeElements(head->next, val); return head->val == val ? head->next : head; }
Complexity analysis Time complexity: O(n), where n is the length of the linked list; the list is traversed once. Space complexity: O(1) for the iterative version; the recursive version above uses O(n) stack space, since the recursion depth can reach n.
This problem is well known and quite often can be found in various text books.
You can take a couple of approaches to actually solve it:
O(N lg K) running time + O(K) memory Another possibility is to use a min-oriented priority queue that will store the K largest values. The algorithm iterates over the whole input and maintains the size of the priority queue. A sketch of this idea follows.
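A C++ sketch of the min-heap variant (the heap never grows past K elements, so each push/pop is O(lg K)):

int findKthLargest(vector<int>& nums, int k) {
    priority_queue<int, vector<int>, greater<int>> pq;  // min-heap of the K largest so far
    for (int x : nums) {
        pq.push(x);
        if ((int)pq.size() > k) pq.pop();  // drop the smallest, keeping exactly K values
    }
    return pq.top();  // the smallest of the K largest == the K-th largest overall
}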
O(N) best case / O(N^2) worst case running time + O(1) memory The smart approach for this problem is to use the selection algorithm (based on the partition method - the same one used in quicksort).
O(N) guaranteed running time + O(1) space
So how can we improve the above solution and make it O(N) guaranteed? The answer is quite simple, we can randomize the input, so that even when the worst case input would be provided the algorithm wouldn't be affected. So all what it is needed to be done is to shuffle the input.
This problem seems trivial, so let's try different approaches to solve it:
Starting from worst time complexity to the best one:
Time complexity: O(N^2), memory: O(1)
The naive approach would be to run an iteration for each element and see whether a duplicate value can be found: this results in O(N^2) time complexity.
Time complexity: O(N lg N), memory: O(1) - not counting the memory used by sort
Since it is trivial task to find duplicates in sorted array, we can sort it as the first step of the algorithm and then search for consecutive duplicates.
Time complexity: O(N), memory: O(N)
Finally, we can use a well-known data structure, the hash table, which helps us identify whether an element has been previously encountered in the array.
This is a trivial but quite nice example of a space-time tradeoff.
class MyStack { public: queue<int> queue1; queue<int> queue2;
/** Initialize your data structure here. */ MyStack() {
}
/** Push element x onto stack. */ void push(int x) { queue2.push(x); while (!queue1.empty()) { queue2.push(queue1.front()); queue1.pop(); } swap(queue1, queue2); } /** Removes the element on top of the stack and returns that element. */ int pop() { int r = queue1.front(); queue1.pop(); return r; } /** Get the top element. */ int top() { int r = queue1.front(); return r; } /** Returns whether the stack is empty. */ bool empty() { return queue1.empty(); } };
class Solution { public: int calculate(string s) { vector<int> stk; char preSign = '+'; int num = 0; int n = s.length(); for (int i = 0; i < n; ++i) { if (isdigit(s[i])) { num = num * 10 + int(s[i] - '0'); } if (!isdigit(s[i]) && s[i] != ' ' || i == n - 1) { switch (preSign) { case '+': stk.push_back(num); break; case '-': stk.push_back(-num); break; case '*': stk.back() *= num; break; default: stk.back() /= num; } preSign = s[i]; num = 0; } } return accumulate(stk.begin(), stk.end(), 0); } };
Complexity Analysis

Time complexity: O(n), where n is the length of the string s. We traverse the string s once to evaluate the expression.

Space complexity: O(n), where n is the length of the string s. The space complexity mainly depends on the stack, which holds at most n elements.
Explanation: MyQueue myQueue = new MyQueue(); myQueue.push(1); // queue is: [1] myQueue.push(2); // queue is: [1, 2] (leftmost is front of the queue) myQueue.peek(); // return 1 myQueue.pop(); // return 1, queue is [2] myQueue.empty(); // return false
Given the head of a singly linked list, return true if it is a palindrome, or false otherwise.

Example 1:

Input: head = [1,2,2,1] Output: true Example 2:

Input: head = [1,2] Output: false

Constraints:

The number of nodes in the list is in the range [1, 10^5]. 0 <= Node.val <= 9

Follow up: Could you solve it in O(n) time and O(1) space?
The functions used below are the HEART n SOUL of linked list questions. Usually, any linked list question can be broken down into these functions:

Reverse ===> used for space optimization Find Mid ===> slow-fast pointers Iteration: normal iter, recursive iter, adjacent-node 2-vars, slow-fast Insert: start, mid, last Delete: start, mid, last THIS QUESTION: find the mid of the linked list --------> do check the even/odd length cases on paper; reverse the second half from the mid pointer ----> see how prev gets changed; now compare the first and second halves. Easy huh! 😜 EdgeCase: linked list of size 0 or 1. A sketch follows.
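A C++ sketch of that find-mid + reverse + compare recipe, which answers the O(n) time / O(1) space follow-up (this mutates the list and does not restore it):

bool isPalindrome(ListNode* head) {
    ListNode *slow = head, *fast = head;
    while (fast && fast->next) {       // find the mid with slow-fast pointers
        slow = slow->next;
        fast = fast->next->next;
    }
    ListNode* prev = nullptr;          // reverse the second half starting at mid
    while (slow) {
        ListNode* next = slow->next;
        slow->next = prev;
        prev = slow;
        slow = next;
    }
    for (ListNode* p = head; prev; prev = prev->next, p = p->next)
        if (p->val != prev->val) return false;  // compare the two halves
    return true;
}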
Solution0 Copy the List into an Array
class Solution { public: bool isPalindrome(ListNode* head) { vector<int> vals; while (head != nullptr) { vals.emplace_back(head->val); head = head->next; } for (int i = 0, j = (int)vals.size() - 1; i < j; ++i, --j) { if (vals[i] != vals[j]) { return false; } } return true; } };
class Solution { public void deleteNode(ListNode node) { node.val=node.next.val; node.next=node.next.next; } } So, this is our code. We don't have access to the previous node of the node to be deleted, but we do have access to the next node, which makes deleting the next node possible. So, we copy the value of the next node into this node and delete the next node (i.e. connecting our current node to the next node's next).
✔️ Solution - I (Calculate product & divide by self)
We can simply calculate product of the whole array and for each element in nums, divide the product by nums[i]. This effectively leaves us with product of whole array except self at each index. We need to take care of zeros that may occur in the array -
1. If there are more than one 0s in nums, the result is an array consisting of all 0. 2. If there is a single 0 in nums, then the result is an array consisting of all 0 except at the index where there was 0 in nums, which will contain product of rest of array. 3. If there's no 0 in nums, then the result is an array ans where ans[i] = prod / nums[i] (prod = product of all elements in nums).
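A C++ sketch of this divide-by-self idea, covering the three zero cases enumerated above (LeetCode 238 guarantees the products fit in 32 bits; long is used defensively):

vector<int> productExceptSelf(vector<int>& nums) {
    int zeros = 0;
    long prod = 1, prodWithoutZero = 1;
    for (int x : nums) {
        if (x == 0) ++zeros;
        else prodWithoutZero *= x;
        prod *= x;
    }
    vector<int> ans(nums.size(), 0);
    for (size_t i = 0; i < nums.size(); ++i) {
        if (zeros == 0) ans[i] = prod / nums[i];                        // case 3: no zero
        else if (zeros == 1 && nums[i] == 0) ans[i] = prodWithoutZero;  // case 2: single zero
        // case 1 (more than one zero): every entry stays 0
    }
    return ans;
}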
class Solution {
public:
    vector<int> productExceptSelf(vector<int>& nums) {
        int length = nums.size();
        vector<int> answer(length);

        // answer[i] holds the product of all elements to the left of index i.
        // Nothing is to the left of index 0, so answer[0] = 1.
        answer[0] = 1;
        for (int i = 1; i < length; i++) {
            answer[i] = nums[i - 1] * answer[i - 1];
        }

        // R is the product of all elements to the right.
        // Initially there is nothing to the right, so R = 1.
        int R = 1;
        for (int i = length - 1; i >= 0; i--) {
            // For index i, the product of everything to the left is answer[i]
            // and the product of everything to the right is R.
            answer[i] = answer[i] * R;
            // R must include every element to the right, so multiply the
            // current value into R before computing the next result.
            R *= nums[i];
        }
        return answer;
    }
};
This idea uses a hash table to record the times of appearances of each letter in the two strings s and t. For each letter in s, it increases the counter by 1 while for each letter in t, it decreases the counter by 1. Finally, all the counters will be 0 if they two are anagrams of each other.
The first implementation uses the built-in unordered_map and takes 36 ms.
class Solution { public: bool isAnagram(string s, string t) { if (s.length() != t.length()) return false; int n = s.length(); unordered_map<char, int> counts; for (int i = 0; i < n; i++) { counts[s[i]]++; counts[t[i]]--; } for (auto count : counts) if (count.second) return false; return true; } }; Since the problem statement says that "the string contains only lowercase alphabets", we can simply use an array to simulate the unordered_map and speed up the code. The following implementation takes 12 ms.
class Solution { public: bool isAnagram(string s, string t) { if (s.length() != t.length()) return false; int n = s.length(); int counts[26] = {0}; for (int i = 0; i < n; i++) { counts[s[i] - 'a']++; counts[t[i] - 'a']--; } for (int i = 0; i < 26; i++) if (counts[i]) return false; return true; } }; Sorting
For two anagrams, once they are sorted in a fixed order, they will become the same. This code is much shorter (this idea can be done in just 1 line using Python as here). However, it takes much longer time --- 76 ms in C++.
class Solution { public: bool isAnagram(string s, string t) { sort(s.begin(), s.end()); sort(t.begin(), t.end()); return s == t; } };
Math Explained :- Any number whose digits add to 9 is always divisible by 9 (18, 27, 36, 45, 54, 63, 72, 81, 90, etc.). Therefore the 'digital root' for any number divisible by 9 is always 9. You can see this even in larger numbers like 99, because 9 + 9 = 18, and then 1 + 8 = 9 still, so the root always becomes 9 for any number divisible by 9.
Additionally, 0 always has a digital root of 0 obviously.
The only other cases you need to worry about to find the digital root are when it isn't 0 or 9.
So for any number that isn't 0 and isn't divisible by 9, the root will always be n % 9 for a given number n (i.e. the difference between the given number n and the nearest smaller number divisible by 9, since numbers divisible by 9 always have a digital root of 9). For example: 100 % 9 = 1 (one greater than 99, which is divisible by 9). 101 % 9 = 2 102 % 9 = 3 and so on.
This explanation/algorithm skips the whole "add digits until there is only 1 remaining", so the description of this problem seems pretty misleading to me since it makes you think the solution will be something unrelated to the optimal one. I guess the point of Leetcode is to learn all of these tricks though.
Solution1: Find the Pattern

Besides a plain loop, we can look for a pattern. Take a three-digit number 'abc' with value s1 = 100 * a + 10 * b + 1 * c. After one round of adding its digits it becomes s2 = a + b + c, so the difference (s1 - s2) = 99 * a + 9 * b is divisible by 9; every round shrinks the number by a multiple of 9. When num is a single digit (at most 9), return num directly. When num is greater than 9, return 9 if it is divisible by 9 (the answer can be neither 0 nor a value with two or more digits); if it is not divisible by 9, return num % 9.
class Solution: def addDigits(self, num: int) -> int: if num > 9: num = num % 9 if num == 0: return 9 return num
Solution This question comes under a broad category of "Array Transformation". This category is the meat of tech interviews. Mostly because arrays are such a simple and easy to use data structure. Traversal or representation doesn't require any boilerplate code and most of your code will look like the Pseudocode itself.
The 2 requirements of the question are:
Move all the 0's to the end of array.
All the non-zero elements must retain their original order.
It's good to realize here that the two requirements are independent of each other, i.e., you can solve the individual sub-problems and then combine them for the final solution.
Approach #1 (Space Sub-Optimal) [Accepted] C++
void moveZeroes(vector<int>& nums) { int n = nums.size();
// Count the zeroes int numZeroes = 0; for (int i = 0; i < n; i++) { numZeroes += (nums[i] == 0); }
// Make all the non-zero elements retain their original order. vector<int> ans; for (int i = 0; i < n; i++) { if (nums[i] != 0) { ans.push_back(nums[i]); } }
// Move all zeroes to the end while (numZeroes--) { ans.push_back(0); }
// Combine the result for (int i = 0; i < n; i++) { nums[i] = ans[i]; } } Complexity Analysis
Space Complexity : O(n), since we are creating the "ans" array to store results.

Time Complexity: O(n). However, the total number of operations is sub-optimal. We can achieve the same result with fewer operations.
If asked in an interview, the above solution would be a good start. You can explain the above to the interviewer (without code) and build your base for the next, optimal solution.
Approach #2 (Space Optimal, Operation Sub-Optimal) [Accepted] This approach works the same way as above, i.e. , first fulfills one requirement and then another. The catch? It does it in a clever way. The above problem can also be stated in alternate way, " Bring all the non 0 elements to the front of array keeping their relative order same".
This is a 2-pointer approach. The fast pointer, denoted by the variable "cur", does the job of processing new elements. If the newly found element is not a 0, we record it just after the last found non-0 element. The position of the last found non-0 element is denoted by the slow pointer, the "lastNonZeroFoundAt" variable. As we keep finding new non-0 elements, we just overwrite them at the "lastNonZeroFoundAt + 1"-th index. This overwrite will not result in any loss of data, because we have already processed what was there (if it was non-0, it has already been written at its corresponding index; if it was 0, it will be handled later).
After the "cur" index reaches the end of array, we now know that all the non-0 elements have been moved to beginning of array in their original order. Now comes the time to fulfil other requirement, "Move all 0's to the end". We now simply need to fill all the indexes after the "lastNonZeroFoundAt" index with 0.
C++
void moveZeroes(vector<int>& nums) { int lastNonZeroFoundAt = 0; // If the current element is not 0, then we need to // append it just in front of last non 0 element we found. for (int i = 0; i < nums.size(); i++) { if (nums[i] != 0) { nums[lastNonZeroFoundAt++] = nums[i]; } } // After we have finished processing new elements, // all the non-zero elements are already at beginning of array. // We just need to fill remaining array with 0's. for (int i = lastNonZeroFoundAt; i < nums.size(); i++) { nums[i] = 0; } } Complexity Analysis
Space Complexity : O(1). Only constant space is used.

Time Complexity: O(n). However, the total number of operations is still sub-optimal. The total number of operations (array writes) the code does is n (the total number of elements).
Approach #3 (Optimal) [Accepted] The total number of operations of the previous approach is sub-optimal. For example, consider an array with all leading zeroes except the last element: [0, 0, 0, ..., 0, 1]. How many write operations to the array? The previous approach writes 0's n-1 times, which is not necessary. We could have instead written just once. How? ..... By only fixing the non-0 element, i.e., the 1.
The optimal approach is again a subtle extension of the above solution. A simple realization is that if the current element is non-0, its correct position can at best be its current position or a position earlier. If it's the latter, the current position will eventually be occupied by a non-0 or a 0 that lies at an index greater than the 'cur' index. We fill the current position with 0 right away, so that unlike the previous solution, we don't need to come back here in the next iteration.
In other words, the code will maintain the following invariant:
All elements before the slow pointer (lastNonZeroFoundAt) are non-zeroes.
All elements between the current and slow pointer are zeroes.
Therefore, when we encounter a non-zero element, we need to swap elements pointed by current and slow pointer, then advance both pointers. If it's zero element, we just advance current pointer.
With this invariant in-place, it's easy to see that the algorithm will work.
C++
void moveZeroes(vector<int>& nums) { for (int lastNonZeroFoundAt = 0, cur = 0; cur < nums.size(); cur++) { if (nums[cur] != 0) { swap(nums[lastNonZeroFoundAt++], nums[cur]); } } } Complexity Analysis
Space Complexity : O(1). Only constant space is used.

Time Complexity: O(n). However, the total number of operations is optimal. The total number of operations (array writes) the code does equals the number of non-0 elements. This gives us a much better best case (when most of the elements are 0) than the last solution. However, the worst case (when all elements are non-0) is the same for both algorithms.
Solution1
void moveZeroes(int* nums, int numsSize) { int i = 0,j = 0; for(i = 0 ; i < numsSize; i++) { if(nums[i] != 0) { nums[j++] = nums[i]; } } while(j < numsSize) { nums[j++] = 0; } }
Solution You can always win a Nim game if the number of stones n in the pile is not divisible by 4.
Reasoning
Let us think of the small cases. It is clear that if there are only one, two, or three stones in the pile, and it is your turn, you can win the game by taking all of them. Like the problem description says, if there are exactly four stones in the pile, you will lose. Because no matter how many you take, you will leave some stones behind for your opponent to take and win the game. So in order to win, you have to ensure that you never reach the situation where there are exactly four stones on the pile on your turn.
Similarly, if there are five, six, or seven stones you can win by taking just enough to leave four stones for your opponent so that they lose. But if there are eight stones on the pile, you will inevitably lose, because regardless whether you pick one, two or three stones from the pile, your opponent can pick three, two or one stone to ensure that, again, four stones will be left to you on your turn.
It is obvious that the same pattern repeats itself for n = 4, 8, 12, 16, …, basically all multiples of 4.
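The whole argument collapses into a one-line check, sketched in C++:

bool canWinNim(int n) {
    return n % 4 != 0;  // you lose exactly when the pile is a multiple of 4
}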
Complexity Analysis
Time complexity: O(1)

Only one check is performed. Space complexity: O(1)

No additional space is used, so the space complexity is also O(1).
bool increasingTriplet(vector<int>& nums) { int c1 = INT_MAX, c2 = INT_MAX; for (int x : nums) { if (x <= c1) { c1 = x; // c1 is min seen so far (it's a candidate for 1st element) } else if (x <= c2) { // here when x > c1, i.e. x might be either c2 or c3 c2 = x; // x is better than the current c2, store it } else { // here when we have/had c1 < c2 already and x > c2 return true; // the increasing subsequence of 3 elements exists } } return false; }
We use two pointers here that give us access to characters for swapping on the left and right sides of the string. For swapping we use the convenient C++ function swap. We just iterate over the characters and swap them.
Time: O(n) Space: O(1)
Solution1 Two-Pointer Swap
class Solution { public: void reverseString(vector<char>& s) { for (int i = 0, j = s.size() - 1; i < s.size()/2; i++, j--) { swap(s[i],s[j]); } } };
How's going Ladies - n - Gentlemen, today we are going to solve another coolest problem i.e. Top K Frequent Elements
Okay, so in order to solve this problem, first of all let's understand what the problem statement is:
Given an integer array nums, and an integer k, return the k most frequent elements. You may return the answer in any order. Okay, so wait wait, listen: just looking at this, if you know the HashMap, you're simply gonna say we can solve this problem easily using a HashMap. And I'll say yes, exactly, we're gonna do that exact thing, but we will use a Heap as well as the HashMap. And if the plain HashMap counting was your first idea, then you've just got the brute force approach.
So, let's talk about its
Brute Force Approach :- Let's take an example,
Input: nums = [1,1,1,2,2,3], k = 2
Output: [1,2]
So, we have 2 steps to perform in this problem:-
HashMap
Heap
Step -1 :- Make a frequency map & fill it with the given elements
[1,1,1,2,2,3]
------------------------------------------------
| 1 ---> | 3 |                                 |
|                                              |
| 2 ---> | 2 |    HashMap of Integer, Integer  |
|                                              |
| 3 ---> | 1 |                                 |
------------------------------------------------
Okay, so we've just completed step no. 1; now it's time to move to the next step.
Step -2 :- Make a MaxHeap & fill it with the keys; at the top of our Heap we will have the most frequent element
HashMap :-
Key     Value
1 ---->   3
2 ---->   2
3 ---->   1

Heap :-
| 1 |   <- from the top of the heap we'll pop as many elements as required to fill our array of size k
| 2 |
| 3 |
------------
Create a result array res & store the K most frequent elements in it.
Heap :-
|   |     res : [1]
| 2 |
| 3 |
------------
Heap :-
|   |     res : [1, 2]
|   |
| 3 |
------------
As our K is 2, we're only gonna store the K most frequent elements in our array; therefore in the result we get:- [1, 2]
I hope, ladies - n - gentlemen, this approach is absolutely clear. Let's code it up!
class Solution {
    public int[] topKFrequent(int[] nums, int k) {
        Map<Integer, Integer> map = new HashMap<>();
        for(int i : nums){
            map.put(i, map.getOrDefault(i, 0) + 1);
        }
        PriorityQueue<Integer> maxHeap = new PriorityQueue<>((a,b) -> map.get(b) - map.get(a));
        for(int key : map.keySet()){
            maxHeap.add(key);
        }
        int res[] = new int[k];
        for(int i = 0; i < k; i++){
            res[i] = maxHeap.poll();
        }
        return res;
    }
}

ANALYSIS :-
Time Complexity :- O(D log D), where D is the number of distinct (unique) elements in the input array: building the heap inserts all D keys (each insertion costing O(log D)), and polling K elements from the heap adds O(K log D) on top of that.

Space Complexity :- O(D), this is the size of the heap.
Well, this is not a super efficient approach. We can solve this more efficiently as well; now some of you will ask, but how?!
Well, for that we have Bucket Sorting
Optimized Approach :- Let's understand what bucket sort is.
Bucket sort, or bin sort, is a sorting algorithm that works by distributing the elements of an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm.
In this process we gonna follow 3 major steps :-
Step - 1 : Create the frequency map:
1.1 Iterate through the given nums[] array.
1.2 With each iteration - check if the map already contains the current key.
If the current key is already in the map, just increase the value for this key.
Else add the key-value pair, where the key is the current int and the value is 1 (1 -> we encounter the given key for the first time).
Step - 2 : Create the Bucket List[]:
The index of the bucket[] array will represent the frequency value from our map.
Why not use an int[] array? Multiple values can have the same frequency; that's why we use a List[] (an array of lists) instead of a regular array.
Iterate through the map and, for each value, add the key at the index of that value.
Step - 3 : If we look at the bucket array, we can see that the most frequent elements are located at the end of the array and the least frequent elements at the beginning. The last step is to iterate from the end to the beginning of the array and add elements to the result List.
I hope, ladies - n - gentlemen, the approach is absolutely clear. Let's code it up!
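Since the code itself does not appear in the post, here is a compact sketch of the three steps above, written in C++ to match most of the other solutions in this document (the structure mirrors the steps, not any particular author's code):

class Solution {
public:
    vector<int> topKFrequent(vector<int>& nums, int k) {
        unordered_map<int, int> freq;                 // Step 1: frequency map
        for (int x : nums) ++freq[x];
        vector<vector<int>> bucket(nums.size() + 1);  // Step 2: bucket[f] holds all values with frequency f
        for (auto& p : freq) bucket[p.second].push_back(p.first);
        vector<int> res;                              // Step 3: walk buckets from most to least frequent
        for (int f = (int)nums.size(); f >= 1 && (int)res.size() < k; --f)
            for (int val : bucket[f])
                if ((int)res.size() < k) res.push_back(val);
        return res;
    }
};

This runs in O(N) time, since every element lands in exactly one bucket and each bucket is visited at most once.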
The naive approach would be to iterate along the first array nums1 and to check for each value whether this value is in nums2 or not. If yes, add the value to the output. Such an approach would result in a pretty bad O(n × m) time complexity, where n and m are the arrays' lengths.

To solve the problem in linear time, let's use the set structure, which provides the in/contains operation in O(1) time in the average case.

The idea is to convert both arrays into sets, and then iterate over the smaller set, checking the presence of each element in the larger set. The time complexity of this approach is O(n + m) in the average case.
Solution1
class Solution {
public:
    vector<int> intersection(vector<int>& nums1, vector<int>& nums2) {
        unordered_set<int> result_set; // holds the result; a set is used to deduplicate it
        int hash[1005] = {0};          // values default to 0
        for (int num : nums1) {        // record in the hash array every number appearing in nums1
            hash[num] = 1;
        }
        for (int num : nums2) {        // if a number appears in nums2 as well, record it in the result
            if (hash[num] == 1) {
                result_set.insert(num);
            }
        }
        return vector<int>(result_set.begin(), result_set.end());
    }
};
n == matrix.length
n == matrix[i].length
1 <= n <= 300
-10^9 <= matrix[i][j] <= 10^9
It is guaranteed that all rows and columns of matrix are sorted in non-decreasing order
1 <= k <= n^2

Follow-up:

Could you solve the problem with constant memory (i.e., O(1) memory complexity)? Could you solve the problem in O(n) time complexity? The approach may be too advanced for an interview, but you may find reading this paper interesting.
✔️ Solution 1: Max Heap keeps up to k elements
The easy approach is to iterate over all elements in the matrix and add them to the maxHeap. The maxHeap keeps up to k smallest elements (because whenever the maxHeap exceeds size k, we remove its top, which is the largest one). Finally, the top of the maxHeap is the kth smallest element in the matrix. This approach reduces the problem to 215. Kth Largest Element in an Array, and doesn't take advantage of the fact that the matrix is already sorted by rows and by columns.
Complexity: Time: O(M * N * logK), where M <= 300 is the number of rows and N <= 300 is the number of columns. Space: O(K), space for the heap, which stores up to k elements.

✔️ Solution 2: Min Heap to find the kth smallest element from amongst N sorted lists
Since each row of the matrix is already sorted, we can view the problem as finding the kth smallest element from amongst M sorted rows. We start with a pointer at the beginning of each row, then we iterate k times; on the ith iteration, the top of the minHeap is the ith smallest element in the matrix. We pop the top from the minHeap, then add to the minHeap the next element from the same row as that top.
class Solution {
public:
    int kthSmallest(vector<vector<int>>& matrix, int k) {
        struct point {
            int val, x, y;
            point(int val, int x, int y) : val(val), x(x), y(y) {}
            bool operator> (const point& a) const { return this->val > a.val; }
        };
        priority_queue<point, vector<point>, greater<point>> que;
        int n = matrix.size();
        for (int i = 0; i < n; i++) {
            que.emplace(matrix[i][0], i, 0);
        }
        for (int i = 0; i < k - 1; i++) {
            point now = que.top();
            que.pop();
            if (now.y != n - 1) {
                que.emplace(matrix[now.x][now.y + 1], now.x, now.y + 1);
            }
        }
        return que.top().val;
    }
};
The encoding rule is: k[encoded_string], where the encoded_string inside the square brackets is repeated exactly k times. Note that k is guaranteed to be a positive integer.

You may assume that the input string is always valid: it contains no extra spaces, and all square brackets are well-formed.

Furthermore, you may assume that the original data contains no digits and that digits only appear as the repeat count k; for example, inputs like 3a or 2[4] will not occur.

Example 1:

Input: s = "3[a]2[bc]"
Output: "aaabcbc"

Example 2:

Input: s = "3[a2[c]]"
Output: "accaccacc"

Example 3:

Input: s = "2[abc]3[cd]ef"
Output: "abcabccdcdcdef"

Example 4:

Input: s = "abc3[cd]xyz"
Output: "abccdcdcdxyz"

Constraints:

1 <= s.length <= 30
s consists of lowercase English letters, digits, and square brackets '[]'
s is guaranteed to be a valid input
All integers in s are in the range [1, 300]
This is the standard solution given under the 'Solution' tab of the same problem: https://leetcode.com/problems/decode-string/solution/ We start with the original string s and index = 0. If the character at index is not ']', it is a digit or a letter (it cannot be '[': this is ensured in the later part of the code; also note that the string won't start with '[', so there is no chance of encountering it initially). First let's take the case where the character is a digit. In that case, count the number of times the inner string needs to be repeated. For example in 26[X], string X will be repeated 26 times, and 26 is stored in k. Now increment the index; this is done since we know the number is followed by '['. Now recurse on the inner string s, and afterwards increment index once more, this time for ']'. While recursing on the inner string, if the character is not a digit, we store it and return it as part of a string. Remember that this inner string needs to be repeated k times, so we append that to the current return string ret. Return the ret string.
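A minimal C++ sketch of this recursion; the helper name decode and the shared index parameter are choices of this sketch, not the official code:

class Solution {
public:
    string decodeString(string s) {
        int index = 0;
        return decode(s, index);
    }
private:
    // Decode starting at index, stopping at ']' or the end of the string.
    string decode(const string& s, int& index) {
        string ret;
        while (index < (int)s.size() && s[index] != ']') {
            if (!isdigit(s[index])) {
                ret += s[index++];                 // plain letter: append as-is
            } else {
                int k = 0;
                while (isdigit(s[index]))          // read the repeat count k
                    k = k * 10 + (s[index++] - '0');
                index++;                           // skip '['
                string inner = decode(s, index);   // recurse on the inner string
                index++;                           // skip ']'
                while (k--) ret += inner;          // repeat the inner string k times
            }
        }
        return ret;
    }
};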
EXPLANATION
1. Deleting k digits means keeping n - k digits, where n is the total number of digits.
2. Use a stack that you keep sorted ascendingly. You remove elements from it as long as you can still make it to n - k digits, and your current element is smaller than the top of the stack:
push(2) => 2
push(4) because 2 < 4 => 24
push(6) because 4 < 6 => 246
pop() because 3 < 6 and we can still end up with 2 digits => 24
pop() for the same reason => 2
push(3) => 23
push(5) => 235
Then just take the first n - k digits => 23. Or you can make sure never to push more than n - k digits, and then the final stack is your solution.
3. Note that you cannot pop elements if that means you will not be able to build a solution of n - k digits. For this, you need to check the current number of elements in the stack and the number of digits to the right of your current position in the input number.

Some More Points
1. The approach is simple. We need the minimum number, thus we need to remove the most significant digits first. For e.g., if we have a number made of the digits 1-4, then 1234 would be the minimum and not 2314 or anything else. So in the case of 2314, we remove 3 first, and then we go for 2 (because they are more significant than 4). Observing this simple idea, we need to remove any digit which is greater than the digit following it. That's why we deleted 3, as it was greater than 1, and similarly 2, as it was also greater than 1.

2. In order to accomplish this, we use the stack data structure, where we pop the top if it is greater than the current digit.

3. The conditions mentioned in the while loop are important to avoid any Runtime Error. For e.g., for ["10001", 2] the answer is "0", but if we don't include the condition !s.empty(), then the while loop will run on an empty stack and try to pop a top which doesn't exist, thus throwing an RE.
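Putting these points together, a C++ sketch of the stack-based idea (a string doubles as the stack here; the trailing removal and leading-zero handling follow the points above):

class Solution {
public:
    string removeKdigits(string num, int k) {
        string st;                                 // string used as a stack of kept digits
        for (char c : num) {
            while (!st.empty() && k > 0 && st.back() > c) {
                st.pop_back();                     // drop a larger, more significant digit
                --k;
            }
            st.push_back(c);
        }
        st.resize(st.size() - k);                  // digits still left to remove come off the end
        int i = 0;
        while (i + 1 < (int)st.size() && st[i] == '0') ++i;  // strip leading zeros
        st = st.substr(i);
        return st.empty() ? "0" : st;
    }
};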
Time Complexity :- O(N) // as we traverse the string only once
Space Complexity :- O(N) // as we store a maximum of n digits in our stack
Time Complexity : O(N^2) , Space Complexity : O(N)
Intuition :
So here, if we observe carefully, we can notice that if we take any person and insert any shorter person before or after him in the queue, then the number of people taller than or equal to him standing before him doesn't change. So we need to place the taller people first in the queue, and we can add the shorter people in between or after them according to the requirement.
So we need to sort the array by height in decreasing order (tallest first). Now, if 2 people have the same height, the person with the lower Ki will be inserted before the other, because the person standing earlier must have at least one fewer equal-or-taller person before him than the one standing later.
So in case of equal heights (Hi), the person with the lesser (Ki) is placed first during sorting. Now, if we insert each person from the sorted array into the ans array at position (Ki), then the people taller than or equal to him will be on his left, and they will be exactly (Ki) in number (from position 0 to position (Ki-1)). Whatever we insert after that will either be shorter (in which case it doesn't matter whether it is inserted before or after his position), or, if equal in height, will have a greater (Ki) than the present one, i.e., will be inserted to his right.
So for each element in the sorted array we insert it at the (Ki)th position in the ans array. Example :
People = [[7,0], [4,4], [7,2], [5,0], [6,1], [5,4], [8,0]]
Sorted People according to the cmp function : [[8,0], [7,0], [7,2], [6,1], [5,0], [5,4], [4,4]]
Now for each iteration in the loop (the Ki of each man is enclosed in ' ', and each inserted man in ""):
man = [8,'0'] -> ans.insert(ans.begin()+'0', man) -> ans = ["[8,0]"]
man = [7,'0'] -> ans.insert(ans.begin()+'0', man) -> ans = ["[7,0]", [8,0]]
man = [7,'2'] -> ans.insert(ans.begin()+'2', man) -> ans = [[7,0], [8,0], "[7,2]"]
man = [6,'1'] -> ans.insert(ans.begin()+'1', man) -> ans = [[7,0], "[6,1]", [8,0], [7,2]]
man = [5,'0'] -> ans.insert(ans.begin()+'0', man) -> ans = ["[5,0]", [7,0], [6,1], [8,0], [7,2]]
man = [5,'4'] -> ans.insert(ans.begin()+'4', man) -> ans = [[5,0], [7,0], [6,1], [8,0], "[5,4]", [7,2]]
man = [4,'4'] -> ans.insert(ans.begin()+'4', man) -> ans = [[5,0], [7,0], [6,1], [8,0], "[4,4]", [5,4], [7,2]]
See how the final ans array fulfills all the conditions.
Solution1 Sort first, then insert at position Ki
class Solution {
public:
    static bool cmp(const vector<int>& a, const vector<int>& b) {
        if (a[0] > b[0]) return true;
        if (a[0] == b[0] && a[1] < b[1]) return true;
        return false;
    }
    vector<vector<int>> reconstructQueue(vector<vector<int>>& people) {
        sort(people.begin(), people.end(), cmp);
        vector<vector<int>> ans;
        for (const vector<int>& man : people)
            ans.insert(ans.begin() + man[1], man); // insert each person at index Ki
        return ans;
    }
};
A palindrome consists of letters with equal partners, plus possibly a unique center (without a partner). The i-th letter from the left has as its partner the i-th letter from the right. For example, in 'abcba', 'aa' and 'bb' are partners, and 'c' is a unique center.
Imagine we built our palindrome. It consists of as many partnered letters as possible, plus a unique center if possible. This motivates a greedy approach.
Algorithm
For each letter, say it occurs v times. We know we have v // 2 * 2 letters that can be partnered for sure. For example, if we have 'aaaaa', then we could have 'aaaa' partnered, which is 5 // 2 * 2 = 4 letters partnered.
At the end, if there was any v % 2 == 1, then that letter could have been a unique center. Otherwise, every letter was partnered. To perform this check, we will check for v % 2 == 1 and ans % 2 == 0, the latter meaning we haven't yet added a unique center to the answer.
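A short C++ sketch of this greedy count (a 128-slot array covers the ASCII letters the problem uses):

class Solution {
public:
    int longestPalindrome(string s) {
        int count[128] = {0};
        for (char c : s) ++count[c];
        int ans = 0;
        for (int v : count) {
            ans += v / 2 * 2;                        // partnered letters
            if (ans % 2 == 0 && v % 2 == 1) ++ans;   // first odd count becomes the unique center
        }
        return ans;
    }
};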
Complexity Analysis
Time Complexity: O(N), where N is the length of s. We need to count each letter.

Space Complexity: O(1), the space for our count, as the alphabet size of s is fixed. We should also consider that in a bit complexity model, technically we need O(log N) bits to store the count values.
What we need to do in this problem is to perform the usual schoolbook addition. We need to start adding numbers from the last elements and take care of the carry, as well as of cases when one number has more digits than the other. Imagine that we want to add two numbers: 986 and 47. Then we have the following steps:
Add 6 and 7, so we have digit 3 and carry equal to 1.
Add 8 and 4 and 1, so we have digit 3 and carry equal to 1.
Add 9 from the first number; we do not have anything from the second, so we take 0 from it. We also have carry equal to 1, so finally we have digit 0 and carry equal to 1.
We still have a carry, but no digits left, so we evaluate 0 + 0 + 1 = 1. And now we can stop: we have no digits and no carry left. The final number we constructed is 1033.
Complexity

Time complexity is O(m + n), where m and n are the lengths of our linked lists. Space complexity is O(max(m, n)) if we count the answer as memory, or O(1) if we do not.
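A C++ sketch of this schoolbook addition on linked lists; the ListNode struct is defined here so the sketch is self-contained, and digits are assumed to be stored most significant first, which is why they are buffered and added from the back:

struct ListNode {
    int val;
    ListNode* next;
    ListNode(int v) : val(v), next(nullptr) {}
};

ListNode* addTwoNumbers(ListNode* l1, ListNode* l2) {
    vector<int> d1, d2;                        // buffer the digits so we can add from the back
    for (ListNode* p = l1; p; p = p->next) d1.push_back(p->val);
    for (ListNode* p = l2; p; p = p->next) d2.push_back(p->val);
    ListNode* head = nullptr;
    int i = d1.size() - 1, j = d2.size() - 1, carry = 0;
    while (i >= 0 || j >= 0 || carry) {        // a missing digit counts as 0
        int sum = carry + (i >= 0 ? d1[i--] : 0) + (j >= 0 ? d2[j--] : 0);
        carry = sum / 10;
        ListNode* node = new ListNode(sum % 10);
        node->next = head;                     // prepend: the result is built back to front
        head = node;
    }
    return head;
}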
Approach #1: Brute Force [Accepted]

Intuition and Algorithm
For each number in the given range, we will directly test if that number is self-dividing.
By definition, we want to test whether each digit is non-zero and divides the number. For example, with 128, we want to test d != 0 && 128 % d == 0 for d = 1, 2, 8. To do that, we need to iterate over each digit of the number.
A straightforward approach to that problem would be to convert the number into a character array (a string in Python), and then convert each digit back to an integer to perform the modulo operation when checking n % d == 0.
We could also continually divide the number by 10 and peek at the last digit. That is shown as a variation in a comment.
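A C++ sketch of the divide-by-10 variation mentioned above, peeling off the last digit each iteration:

class Solution {
public:
    vector<int> selfDividingNumbers(int left, int right) {
        vector<int> ans;
        for (int n = left; n <= right; ++n) {
            bool ok = true;
            for (int x = n; x > 0; x /= 10) {  // peek at the last digit, then drop it
                int d = x % 10;
                if (d == 0 || n % d != 0) { ok = false; break; }
            }
            if (ok) ans.push_back(n);
        }
        return ans;
    }
};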
Complexity Analysis
Time Complexity: O(D), where D is the number of integers in the range [L, R], and assuming log(R) is bounded. (In general, the complexity would be O(D log R).)

Space Complexity: O(D), the length of the answer.
Let's try to repeatedly choose the smallest left-justified partition. Consider the first label, say it's 'a'. The first partition must include it, and also the last occurrence of 'a'. However, between those two occurrences of 'a', there could be other labels that make the minimum size of this partition bigger. For example, in "abccaddbeffe", the minimum first partition is "abccaddb". This gives us the idea for the algorithm: For each letter encountered, process the last occurrence of that letter, extending the current partition [anchor, j] appropriately.
Algorithm
We need an array last[char] -> index of S where char occurs last. Then, let anchor and j be the start and end of the current partition. If we are at a label that occurs last at some index after j, we'll extend the partition j = last[c]. If we are at the end of the partition (i == j) then we'll append a partition size to our answer, and set the start of our new partition to i+1.
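A C++ sketch of this two-pass algorithm: one pass to record last, one pass to grow and close the partitions:

class Solution {
public:
    vector<int> partitionLabels(string S) {
        int last[26];
        for (int i = 0; i < (int)S.size(); ++i)
            last[S[i] - 'a'] = i;              // last occurrence of each letter
        vector<int> ans;
        int anchor = 0, j = 0;
        for (int i = 0; i < (int)S.size(); ++i) {
            j = max(j, last[S[i] - 'a']);      // extend the partition to cover this letter
            if (i == j) {                      // the partition closes here
                ans.push_back(i - anchor + 1);
                anchor = i + 1;
            }
        }
        return ans;
    }
};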
Complexity Analysis
Time Complexity: O(N), where N is the length of S.

Space Complexity: O(1) to keep the data structure last of not more than 26 characters.
Let's see how we can solve this using a brute-force approach.

For each index i, we can just iterate over the array till we either find the 1st index j such that T[j] > T[i], or reach the end of the array. If we find such a j, we have ans[i] = j-i. Otherwise, ans[i] = 0.
Time Complexity : O(N^2), where N is the number of elements in the input array T. In the worst case, we iterate till the end of the array for each index i. So, the total number of iterations required is (N-1) + (N-2) + (N-3) + ... + 1 = N(N-1)/2, which is equivalent to O(N^2). Space Complexity : O(1), ignoring the space required by the output array.
✔️ Solution - II (Decreasing Monotonic Stack)
In the above solution, we can see that in the worst case, we are repeatedly iterating till the end of the array, either to find the next greater element at the very end or to not find it at all. This is redundant. We can optimize the above approach by observing that multiple elements can share a common warmer day. For e.g., consider [4,3,2,1,5]. In the brute force, we would have iterated till the 5th element in every case and assigned ans[i] as the difference between the indices. However, we see that all elements share 5 as the next warmer day.
Thus, the pattern we see is that we iterate forward till we find a warmer day, and that day will be the answer for all elements before it that are cooler (and haven't found a warmer day yet). Thus, we can maintain a stack consisting of indices of days which haven't found a warmer day. The temperatures at these indices will be in decreasing order, since we are only storing days that haven't found a warmer next day, and hence it is known as a decreasing monotonic stack. Whenever we find a current day cur which is warmer, we check the elements of the stack and, for all those whose day's temperature is less than T[cur], assign them the difference of the indices and pop them.
Thus, the algorithm can be summarized as -
Initialize an empty stack s and an ans array of size N, all initialized to 0s.
Iterate over T from the start.
For each current day cur, check if today's temperature T[cur] is warmer than the values corresponding to previous days' indices stored in the stack (T[s.top()]).
Assign the answer for all elements of the stack for which the current day's temperature is warmer (T[cur] > T[s.top()]) and pop them off the stack.
Push cur onto the stack, denoting that we need to find a warmer next day for cur.
All the elements still in the stack at the end don't have a next greater element. We don't have to worry about them since ans is already initialized to 0s. Thus, we can directly return ans.
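A C++ sketch of the algorithm summarized above:

class Solution {
public:
    vector<int> dailyTemperatures(vector<int>& T) {
        int n = T.size();
        vector<int> ans(n, 0);
        stack<int> s;                              // indices of days still awaiting a warmer day
        for (int cur = 0; cur < n; ++cur) {
            while (!s.empty() && T[cur] > T[s.top()]) {
                ans[s.top()] = cur - s.top();      // cur is the next warmer day for s.top()
                s.pop();
            }
            s.push(cur);
        }
        return ans;                                // days left in the stack keep their 0
    }
};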
Time Complexity : O(N). In the worst case, we require O(2*N) ~ O(N) iterations, since every index is pushed and popped at most once. Space Complexity : O(N). In the worst case, we may have decreasing elements in T and the stack will store all N indices in it.
✔️ Solution - III (Monotonic Stack - 2)
Another way of modelling the problem in terms of a monotonic stack, which some may find more intuitive, is to start from the end of the array. We again maintain a monotonic stack in this case, similar to the above approach; the only change is that we start from the end.

This will again allow us to find the next warmer element for each index just by looking through the stack. Since we are maintaining a sorted stack (increasing from top to bottom), we know that the first element we find in the stack for the cur-th day such that T[s.top()] > T[cur] will be the next warmer day required for that element.

Starting from the end, we each time assign the next warmer day to be the top-of-stack element. In the 1st approach, we instead started from the beginning and only assigned the next warmer day(s) once we found a day warmer than all preceding elements. Both approaches work just fine.
The algorithm can be summarized as -
Initialize an empty stack s and an ans array of size N, all initialized to 0s.
Iterate over T from the end.
For each current day cur, pop from the stack the indices whose temperature (T[s.top()]) is not higher than today's temperature T[cur], i.e., T[s.top()] <= T[cur]. This popping is done because these elements are cooler than T[cur] and occur later than cur; thus, they can never be a potential answer for elements on the left.
Now that all elements not warmer than T[cur] have been popped off, the stack s is either empty or has some element warmer than cur. If the stack is empty, assign ans[cur] = 0, because no next warmer element exists for cur. Otherwise, assign ans[cur] = s.top() - cur, the difference between the indices of the next warmer element and cur.
Then push cur onto the stack, since it has the potential to be the next closest warmer day for the remaining elements on the left of T.
Finally, return ans.
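A C++ sketch of this backwards variant:

class Solution {
public:
    vector<int> dailyTemperatures(vector<int>& T) {
        int n = T.size();
        vector<int> ans(n, 0);
        stack<int> s;                          // indices of later days, warmer toward the bottom
        for (int cur = n - 1; cur >= 0; --cur) {
            while (!s.empty() && T[s.top()] <= T[cur])
                s.pop();                       // cooler, later days can never be an answer
            ans[cur] = s.empty() ? 0 : s.top() - cur;
            s.push(cur);
        }
        return ans;
    }
};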
Traverse the array from 0 to n-2 (leaving the last character). Now, if the current char is 0, increment i by 1; else increment it by 2, as a 1 is always followed by either a 1 or a 0. After the loop ends, check whether i is pointing to the last char (i.e., n-1): if it is, return true, else false.
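A C++ sketch of this single pass, assuming the LeetCode signature isOneBitCharacter:

class Solution {
public:
    bool isOneBitCharacter(vector<int>& bits) {
        int i = 0, n = bits.size();
        while (i < n - 1)                     // leave the last bit alone
            i += (bits[i] == 0) ? 1 : 2;      // a 1 always starts a two-bit character
        return i == n - 1;                    // landing exactly on the last bit => one-bit character
    }
};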
In the brute force approach, we consider every possible subarray that can be formed from the given array nums. For every subarray nums[i:j] considered, we need to check whether this is the smallest unsorted subarray or not. Thus, for every such subarray considered, we find out the maximum and minimum values lying in that subarray, given by max and min respectively.

If the subarrays nums[0:i-1] and nums[j:n-1] are correctly sorted, then only nums[i:j] could be the required subarray. Further, the elements in nums[0:i-1] all need to be less than min for satisfying the required condition. Similarly, all the elements in nums[j:n-1] need to be larger than max. We check these conditions for every possible i and j selected.

Further, we also need to check if nums[0:i-1] and nums[j:n-1] are sorted correctly. If all the above conditions are satisfied, we determine the length of the unsorted subarray as j-i. We do the same process for every subarray chosen and determine the length of the smallest unsorted subarray found.
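A direct C++ sketch of these checks, trying every candidate window nums[i, j); this is an intentionally slow illustration of the brute force described above, not an efficient solution:

class Solution {
public:
    int findUnsortedSubarray(vector<int>& nums) {
        int n = nums.size();
        if (is_sorted(nums.begin(), nums.end())) return 0;   // already sorted: empty window
        int best = n;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j <= n; j++) {               // candidate window nums[i, j)
                int mn = *min_element(nums.begin() + i, nums.begin() + j);
                int mx = *max_element(nums.begin() + i, nums.begin() + j);
                bool ok = is_sorted(nums.begin(), nums.begin() + i) &&   // prefix sorted
                          is_sorted(nums.begin() + j, nums.end());       // suffix sorted
                for (int k = 0; k < i && ok; k++) if (nums[k] > mn) ok = false; // prefix <= window min
                for (int k = j; k < n && ok; k++) if (nums[k] < mx) ok = false; // suffix >= window max
                if (ok) best = min(best, j - i);
            }
        }
        return best;
    }
};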
A brute-force way to solve this question is to take each number in range [1, n] and push it into ans array if it doesn't occur in nums.
C++
class Solution {
public:
    vector<int> findDisappearedNumbers(vector<int>& nums) {
        vector<int> ans;
        for(int i = 1; i <= size(nums); i++)
            if(find(begin(nums), end(nums), i) == end(nums)) // linear search in nums for each i
                ans.push_back(i);
        return ans;
    }
};

Python
class Solution:
    def findDisappearedNumbers(self, nums):
        return [i for i in range(1, len(nums)+1) if i not in nums]

Time Complexity : O(n^2), we iterate over the range [1, n] which takes O(n), and for each iteration we check if that element occurs in nums, which takes another O(n), giving a total time of O(n^2). Space Complexity : O(1), excluding the space required for the output vector, we only use constant extra space. The output space is generally not included in the space complexity.
✔️ Solution - II (Sort & Binary-Search)
Instead of linear-searching if every element in range [1, n] is present in nums or not, we could instead sort nums and then apply binary-search every time. If the element is not found in nums, we include it in ans.
C++
class Solution {
public:
    vector<int> findDisappearedNumbers(vector<int>& nums) {
        sort(begin(nums), end(nums));
        vector<int> ans;
        for(int i = 1; i <= size(nums); i++)
            if(!binary_search(begin(nums), end(nums), i)) // binary search in nums for each i
                ans.push_back(i);
        return ans;
    }
};

Python
class Solution:
    def findDisappearedNumbers(self, nums):
        nums.sort()
        return [i for i in range(1, len(nums)+1) if nums[bisect_left(nums, i)%len(nums)] != i]

Time Complexity : O(n log n), we iterate over the range [1, n] which takes O(n), and for each iteration we check if that element occurs in nums using binary search, which takes another O(log n), giving a total time of O(n log n). Space Complexity : O(sort), the only extra space required is the one used by the internal sorting algorithm. Ignoring that space, we can say it to be O(1).
✔️ Solution - III (HashSet)
We can do even better if we just insert every element from nums into a hashset and then iterate over the range [1, n], adding to ans only those elements that are not present in the hashset.
C++
class Solution {
public:
    vector<int> findDisappearedNumbers(vector<int>& nums) {
        unordered_set<int> s(begin(nums), end(nums)); // insert every nums[i] into the hashset
        vector<int> ans(size(nums) - size(s));
        for(int i = 1, j = 0; i <= size(nums); i++)
            if(!s.count(i)) ans[j++] = i; // add all elements not found in the hashset to ans
        return ans;
    }
};

Python
class Solution:
    def findDisappearedNumbers(self, nums):
        s = set(nums)
        return [i for i in range(1, len(nums)+1) if i not in s]

Time Complexity : O(n), we require O(n) time to insert all elements into the hashset and another O(n) time to iterate over the range and insert elements not present in the hashset into ans, thus giving a total time of O(n). Space Complexity : O(n), required to maintain the hashset.
✔️ Solution - IV (Boolean array)
We can slightly optimize the previous approach by using a boolean array of size n instead of a hashset, since the range is known to be [1, n].
C++
class Solution {
public:
    vector<int> findDisappearedNumbers(vector<int>& nums) {
        vector<bool> seen(size(nums)+1);
        vector<int> ans;
        for(auto c : nums) seen[c] = true;
        for(int i = 1; i <= size(nums); i++)
            if(!seen[i]) ans.push_back(i);
        return ans;
    }
};

Python

class Solution:
    def findDisappearedNumbers(self, nums):
        ans, seen = [], [False]*(len(nums)+1)
        for c in nums:
            seen[c] = True
        for i in range(1, len(nums)+1):
            if not seen[i]:
                ans.append(i)
        return ans

Time Complexity : O(n) Space Complexity : O(n)
✔️ Solution - V (Placing Elements at Correct Index - Space Optimized)
This solution involves placing all possible elements at their correct index place. By that, I mean every possible index i should be occupied by the correct element i+1, i.e., nums[i] = i+1. This allows us to check whether a number j from the range [1, n] exists in nums or not.
A number j will be present in nums only if j itself is located at nums[j-1], which is its correct index place. A number j that is not present in nums won't have the correct element (which is j itself) at its correct index place nums[j-1]; that index place will be occupied by some other element.
Now, Can we do this linearly using constant space? Yes!
We will iterate over each element of nums:
For each element c, if the correct index place of c, i.e., nums[c-1], is not occupied by c, then we place c at its correct index place. But we don't want to lose the number that was already present at nums[c-1], so we swap instead: the number at nums[c-1] becomes the current element c & vice versa.
We placed the original current element c at its correct place, but now we have another element as c, which also needs to be placed at its correct place. So we repeat the above step till c is at its correct place in nums.
Steps 2 & 3 are repeated for all elements of nums, so that we ensure every possible index is occupied by the correct element. At the end, the indices not occupied by their correct elements are the ones whose values don't occur in nums. Let nums = [4,3,2,7,8,2,3,1]; a sketch of this placement approach is given below.
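A C++ sketch of this cyclic placement (the inner loop keeps swapping until the current slot holds a value that is either correct or a duplicate of the one already sitting at its target slot):

class Solution {
public:
    vector<int> findDisappearedNumbers(vector<int>& nums) {
        int n = nums.size();
        for (int i = 0; i < n; ++i)
            while (nums[i] != nums[nums[i] - 1])        // place nums[i] at its correct index
                swap(nums[i], nums[nums[i] - 1]);
        vector<int> ans;
        for (int i = 0; i < n; ++i)
            if (nums[i] != i + 1) ans.push_back(i + 1); // index not holding i+1 => i+1 is missing
        return ans;
    }
};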
An alternative marking trick negates values instead of swapping them:
1st for loop: for each value x, negate the element at the xth position.
2nd for loop: collect all the positions that hold a positive element. These are the missing values to return.
vector<int> findDisappearedNumbers(vector<int>& nums) {
    vector<int> ans;
    // 1st for loop: nums = [4,3,2,7,8,2,3,1]
    for(int i = 0; i < nums.size(); i++) {
        int temp = nums[i];
        temp = (temp > 0) ? temp : -temp;   // take the absolute value, as it may already be negated
        if(nums[temp-1] > 0)
            nums[temp-1] *= -1;             // mark "value temp exists" by negating position temp-1
    }
    // after the loop: nums = [-4,-3,-2,-7,8,2,-3,-1]
    // 2nd for loop: the 4th & 5th indexes stayed positive
    for(int i = 0; i < nums.size(); i++)
        if(nums[i] > 0)
            ans.push_back(i+1);             // ans = [5,6]
    return ans;
}
class Solution {
public:
    int threeSumClosest(vector<int>& nums, int target) {
        // BRUTE FORCE APPROACH... (but a little improved)
        // Applying two pointers on the sorted array for target - curr number
        // time complexity O(N^2)
        int ans = 0;            // ...our final sum (of all three integers) will be stored here
        int prevDiff = INT_MAX; // ...the closest difference so far between target and a computed sum;
                                // updated each time we find a sum closer than the previous closest
        int temp = target;
        sort(nums.begin(), nums.end()); // sorting the array in order to carry out the two-pointer approach
        for (int i = 0; i < nums.size() - 2; i++) { // loop till nums.size()-2 because we play with three
                                                    // pointers, two of which must stay inside the array
            temp = target - nums[i]; // reduce the target to target - curr value (curr value is one of
                                     // the three integers we take into consideration)
            int start = i + 1;       // start one index ahead of the curr index
            int end = nums.size() - 1;
            int x; // sum of the two pointed elements, compared against the new target (temp),
                   // similar to the two-sum problem
            // idea: get the sum of two integers as close as possible (or equal) to the new
            // target value temp by moving the two pointers
            while (start < end) {
                x = nums[start] + nums[end];
                if (x == temp) break;  // can't get any closer than the TARGET ITSELF!
                if (x < temp) start++; // sum too low: shift start right to increase the sum
                else end--;            // sum too high: shift end left to decrease the sum
            }
            // after this traversal, x + nums[i] is a sum of three integers as close to temp
            // as this i allows; compare its difference from the original target with the
            // previous best difference and update the answer if it is closer
            if (abs(target - (x + nums[i])) < prevDiff) {
                ans = x + nums[i];
                prevDiff = abs(target - (x + nums[i]));
            }
        }
        return ans;
    }
};
class Solution {
public:
    int fourSumCount(vector<int>& nums1, vector<int>& nums2, vector<int>& nums3, vector<int>& nums4) {
        int count = 0;
        unordered_map<int,int> mp;
        /* Philosophy
           1. I know that an addition has two parts, e.g. (a + b): Part 1 - a, Part 2 - b.
           2. So, let's find this dependency. How can I do it?
           3. With a 4Sum, 2 of the numbers go to Part 1 and the other 2 go to Part 2,
              which depends on Part 1 for a zero resultant.
           4. I store the pairwise sums of nums1 and nums2 in a FREQUENCY hashmap.
           5. Then I traverse the 2nd part of the summation (the remaining two nums)
              and keep checking whether (0 - it3 - it4) exists in the map.
           6. If yes, add the frequency from Part 1 to the COUNT variable.
           7. Return count. */
        // Traversing Part 1
        for (auto &it1 : nums1)
            for (auto &it2 : nums2)
                mp[it1 + it2]++;
        // Traversing Part 2
        for (auto &it3 : nums3)
            for (auto &it4 : nums4)
                if (mp.count(0 - it3 - it4))
                    count += mp[0 - it3 - it4];
        return count;
    }
};
Intuition : Volume/Area depends upon the bar having minimum height
Now suppose height[i] < height[j]; in this case we can only store water with area height[i]*(j-i). There is a chance that a value greater than height[i] is present further along in the array, so increment i. Vice versa, if height[i] > height[j], the area would be height[j]*(j-i); in this case there's a chance that a value greater than height[j] is present earlier in the array, so decrement j.
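A C++ sketch of this two-pointer argument:

class Solution {
public:
    int maxArea(vector<int>& height) {
        int i = 0, j = height.size() - 1, best = 0;
        while (i < j) {
            best = max(best, min(height[i], height[j]) * (j - i)); // area limited by the shorter bar
            if (height[i] < height[j]) ++i;  // move the shorter side inward
            else --j;
        }
        return best;
    }
};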
A solution for beginners, which is straightforward, easy to understand, without too many complications and room to optimize once you understand the basic premise of the question. Hope this helps!
Time Complexity: O(n). Space Complexity: O(min(a, b)) for the unordered set, where a is the size of the string and b is the number of characters in the character set; min(a, b) is the upper bound of the space used.