↳
For every (i, j), compute the sum of the rectangle bounded by (i, j), (i, m), (n, j), and (n, m), where (n, m) is the size of the matrix M. Call that sum s(i, j). You can compute s(i, j) by dynamic programming: s(i, j) = M(i, j) + s(i+1, j) + s(i, j+1) - s(i+1, j+1). The sum of any rectangle can then be computed from s(i, j) by inclusion-exclusion.
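The recurrence above can be sketched as follows (a minimal Python version; the function names are mine, and I pad the table with a zero row/column so the recurrence never goes out of bounds):

```python
# s[i][j] holds the sum of the rectangle with top-left corner (i, j) and
# bottom-right corner (n-1, m-1), matching the recurrence in the comment.

def suffix_sums(M):
    n, m = len(M), len(M[0])
    # Pad with an extra zero row/column so s[i+1][j] etc. are always valid.
    s = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            s[i][j] = M[i][j] + s[i + 1][j] + s[i][j + 1] - s[i + 1][j + 1]
    return s

def rect_sum(s, i1, j1, i2, j2):
    # Sum of M over rows i1..i2-1 and columns j1..j2-1, in O(1),
    # by inclusion-exclusion on the four corner suffix sums.
    return s[i1][j1] - s[i2][j1] - s[i1][j2] + s[i2][j2]

M = [[1, 2], [3, 4]]
s = suffix_sums(M)
print(rect_sum(s, 0, 0, 2, 2))  # whole matrix: 10
print(rect_sum(s, 1, 1, 2, 2))  # bottom-right cell: 4
```

After the O(nm) preprocessing pass, every rectangle query is O(1).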
↳
Awesome!!
↳
This answer is already popular in computer vision! It is called the integral image. See this page: http://en.wikipedia.org/wiki/Haar-like_features
↳
What were the online coding questions like? Could you elaborate?
↳
Object detection. Is that what yours was?
↳
It is the same as mine. Could you give me more details about the online coding? What algorithm did they test in the object detection part?
↳
Do you mind sharing which hard LeetCode questions they asked during the interview?
↳
I don't think it's fair to share which question they asked. But the exact same question is on LeetCode, and its difficulty level is hard.
↳
Which LeetCode topics were you asked about? Also, did they ask you about system design and CS fundamentals?
↳
Coded in python but wasn't able to finish it
↳
Can you elaborate on the question
↳
Given a matrix and the coordinates of two rectangles, calculate the weighted IoU in linear/constant time.
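One plausible reading of "weighted IoU" (my interpretation, not confirmed by the thread): treat the matrix entries as weights and take (weight of the intersection) / (weight of the union) for two axis-aligned rectangles. With a prefix-sum table, that is O(nm) preprocessing and O(1) per query. A hedged sketch, with hypothetical function names:

```python
# Prefix-sum ("integral image") table: s[i][j] = sum of M over rows < i, cols < j.
def prefix_sums(M):
    n, m = len(M), len(M[0])
    s = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            s[i + 1][j + 1] = M[i][j] + s[i][j + 1] + s[i + 1][j] - s[i][j]
    return s

def rect_sum(s, r1, c1, r2, c2):
    # Sum over rows r1..r2-1 and cols c1..c2-1; empty rectangles contribute 0.
    if r1 >= r2 or c1 >= c2:
        return 0
    return s[r2][c2] - s[r1][c2] - s[r2][c1] + s[r1][c1]

def weighted_iou(M, a, b):
    # a, b are rectangles (r1, c1, r2, c2) with exclusive bottom-right corners.
    s = prefix_sums(M)
    inter = rect_sum(s, max(a[0], b[0]), max(a[1], b[1]),
                     min(a[2], b[2]), min(a[3], b[3]))
    union = rect_sum(s, *a) + rect_sum(s, *b) - inter
    return inter / union if union else 0.0
```

With uniform weights of 1 this reduces to the ordinary (area-based) IoU.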
↳
I don't think you can sort in O(log n), because you need to go through all the data at least once, which is already O(n). You can do better only if the data is guaranteed to satisfy some specific constraint or relationship. The best a comparison sort can do on completely random data is O(n log n).
↳
I didn't come up with the answer. It is not difficult; I just wasn't prepared.
↳
What is the question?
↳
If you do it backwards, you actually just need to compare the last greatest value against the next element, so it should be O(n).
↳
Just use a monotonic stack: it gives you the next greater element for every element of the array in O(n) time with O(n) space.
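The monotonic-stack idea above can be sketched like this (a minimal Python version; the function name is mine):

```python
# For each element, find the next greater element to its right
# in O(n) time and O(n) extra space using a monotonic stack.

def next_greater(nums):
    result = [-1] * len(nums)   # -1 means no greater element to the right
    stack = []                  # indices whose next-greater is not yet known
    for i, x in enumerate(nums):
        # Pop every index whose value is smaller than the current element;
        # the current element is their next greater element.
        while stack and nums[stack[-1]] < x:
            result[stack.pop()] = x
        stack.append(i)
    return result

print(next_greater([2, 1, 2, 4, 3]))  # [4, 2, 4, -1, -1]
```

Each index is pushed and popped at most once, which is why the total work is O(n) even with the nested loop.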
↳
My O(n^2) solution was rejected; I then tried a reverse search, but ran out of time.
↳
There will be many documents in a document database. The labelling system must use machine learning to label them into different categories, e.g. help desk, system document, technical. A small training dataset is available, but it is not entirely reliable.
↳
The correct answer would be to use a combination of weak learning methods and gradually incorporate feedback to make the model stronger.
↳
The API rate limiter was really simple; just look at uber/ratelimit on GitHub and that's it. The rest was fairly easy.
↳
Mean squared error (MSE) is a metric for measuring image or video quality. It is popular because the analysis and mathematics are easier with this L2-norm metric. However, most video and image quality experts agree that MSE is not a very good measure of perceptual video and image quality.
↳
The mathematical reasoning behind MSE is as follows: in any real application, noise in the readings or the labels is inevitable. We generally assume this noise follows a Gaussian distribution, which holds well for most real applications. Assuming e follows a Gaussian distribution in y = f(x) + e and computing the MLE, we get MSE, which is also the L2 distance. Note: assuming some other noise distribution leads to a different MLE estimate, which will not be MSE.
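For concreteness, the Gaussian-MLE argument above can be written out as a short derivation (standard notation, my choice of symbols):

```latex
% Assume y_i = f(x_i) + e_i with e_i \sim \mathcal{N}(0, \sigma^2) i.i.d.
\log L(f) = \sum_{i=1}^{N} \log \frac{1}{\sqrt{2\pi}\,\sigma}
            \exp\!\left(-\frac{(y_i - f(x_i))^2}{2\sigma^2}\right)
          = -\frac{N}{2}\log\!\left(2\pi\sigma^2\right)
            - \frac{1}{2\sigma^2}\sum_{i=1}^{N} \left(y_i - f(x_i)\right)^2 .
```

Since the first term does not depend on f, maximizing the log-likelihood is equivalent to minimizing the sum of squared errors, i.e. the MSE.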
↳
MSE quantifies the weight of the errors in a model. This helps us understand model accuracy in a way that is useful when choosing between different types of models.
↳
Use a hash table or tree.
↳
modify merge sort
↳
A sample outline of the O(n log n) algorithm (intersection of two sorted arrays):

a.sort();
b.sort();
list c = {};
int i1 = 0, i2 = 0;
while (i1 < n && i2 < n) {
    if (a[i1] == b[i2]) {
        c.insert(a[i1]);
        i1++;
        i2++;
    } else if (a[i1] < b[i2]) {
        i1++;
    } else {
        i2++;
    }
}
return c;