Interview questions: Software developer | Glassdoor.ch

# Interview questions: Software developer

435

Interview questions for software developer, shared by candidates

## Top interview questions



### A candidate for a software developer position at Google was asked...

Dec 22, 2010
Compress a given string. Input: aaaaabbccc. Output: a5b2c3

11 answers

- Simply iterate over the input, count the repeating characters, and append each character with its run count to the output.

- I would use Huffman coding.

- A C++ attempt (the original was truncated; reconstructed so it compiles):

```cpp
#include <iostream>
#include <string>
using namespace std;

int main() {
    string str;
    cin >> str;
    string out;
    for (size_t i = 0; i < str.size(); ) {
        size_t j = i;
        while (j < str.size() && str[j] == str[i]) j++;  // end of the run
        out += str[i];
        out += to_string(j - i);
        i = j;
    }
    cout << out << endl;
}
```

- This is simple RLE (run-length encoding), a lossless compression format. The string-based solution works; a more efficient method would just use `char *`s. Also, were you given that exact output? Because it isn't the most optimal RLE in practice. `aa4bb1cc2`-style encoding is still RLE, but you only write a run length once a letter repeats. That works well when there are many single-character runs: for `aaabaaacd`, basic RLE gives `a3b1a3c1d1`, while the modified form gives `aa2baa2cd`. Single runs cost 1 instead of 2 (cheaper), runs of 2 cost 2 (same), and runs of 3+ cost 3 (more expensive), so if you expect a lot of singles, you win out.

- There was also a follow-up question: how would you decompress the compressed string? You need to introduce some separator symbol, because a string like `a222` is ambiguous: is it `aaa...a` (222 times) or `aa22`?

- In Java:

```java
public static String compress(String s) {
    int len = s.length();
    StringBuilder res = new StringBuilder(len);
    char prev = 0;
    int count = 0;
    for (int i = 0; i < len; i++) {
        char c = s.charAt(i);
        if (c != prev && i != 0) {
            res.append(prev);
            res.append(count);
            count = 1;
        } else {
            count++;
        }
        prev = c;
    }
    res.append(prev);
    res.append(count);
    return res.toString();
}
```

- Another Java answer counted total occurrences per letter in a 26-element array (`a[s.charAt(j) - 'a']++`) and then emitted each letter with its total count. Note that this loses run order, so it only matches RLE output when each character's occurrences are grouped together.

- A C-style solution splits the problem in two parts: first get the repetition count for a single character (or token), then iterate until the end of the string (or stream), building the output as you go:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* length of the run of identical characters starting at str */
int count_rep(const char *str) {
    int res = 0;
    char cur = *str;
    while (cur != '\0' && *str++ == cur) res++;
    return res;
}

char *compress(const char *str) {
    char *buf = malloc(strlen(str) * 4 + 1);
    char *res = buf;
    for (;;) {
        int cplen = count_rep(str);
        if (cplen == 0) break;
        res += sprintf(res, "%c%d", *str, cplen);
        str += cplen;
    }
    *res = '\0';
    return buf;
}
```

- One solution also addresses unambiguous decompression without extra separators: only 256 values are needed to represent every char in the ASCII table, so if the interviewer confirms that restriction, you can pack the run count into the most significant 8 bits of a 16-bit char (e.g. Java's `char`) and the character itself into the least significant 8 bits. Decoding is then `int times = comp >> 8; char c = (char)(comp & 0xFF);`.
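Following the run-counting idea in the answers above, here is a minimal Python sketch of both directions; the decompressor assumes the original string contains no digit characters (the ambiguity a commenter pointed out), and the names `rle_compress`/`rle_decompress` are illustrative:

```python
from itertools import groupby

def rle_compress(s):
    # each maximal run becomes <char><run length>, e.g. "aaaaabbccc" -> "a5b2c3"
    return "".join(f"{ch}{len(list(run))}" for ch, run in groupby(s))

def rle_decompress(s):
    # inverse transform, assuming the original text contains no digits
    out, i = [], 0
    while i < len(s):
        ch = s[i]
        i += 1
        j = i
        while j < len(s) and s[j].isdigit():  # read the whole count
            j += 1
        out.append(ch * int(s[i:j]))
        i = j
    return "".join(out)
```

Note that for inputs with many single-character runs this format doubles the size, which is why the modified encoding discussed above only writes a count after a repeated letter.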

### A candidate for a software developer position at Google was asked...

June 18, 2011
I had to find the number in a sorted array that was closest to a given number.

8 answers

- Should be done with binary search, but I messed up the implementation.

- Here's a possible solution in Java with O(log n) runtime and no recursion, to avoid method-call overhead:

```java
public static int findNearestValue(int arr[], int value) {
    int start = 0;
    int end = arr.length - 1;
    while (start < end) {
        int midLow = start + (end - start) / 2;
        int midHigh = midLow + 1;
        if (Math.abs(value - arr[midLow]) < Math.abs(value - arr[midHigh])) {
            end = midLow;
        } else {
            start = midHigh;
        }
    }
    return arr[start];
}
```

- The above code will fail if the array has duplicates, e.g. 2 6 6 6 6 6 6 6 16.

- Start with `low = 0` and `high = arr.length - 1` and recurse: compare `arr[mid]` with the value and descend into the half whose boundary brackets it, finally returning the nearer of the two remaining candidates.

- Here's a more optimized version: you don't need to do this recursively, and you don't need to include the middle element in the next slice of the array. Binary-search iteratively until `low` and `high` are adjacent, then compare `abs(value - arr[low])` with `abs(value - arr[high])` and return the closer element.

- Here is a recursive binary search: return the index if `arr[mid] == num`, otherwise recurse into `[mid+1, j]` or `[i, mid]`.

- A C version tracks `lo`, `hi`, `mid`, and the previous `mid` during the search, then returns whichever of those four indexes has the smallest absolute difference from the key (using a `moddiff(a,b)` macro for `|a - b|`).

- In Python:

```python
def find_nearest(numbers, xs):
    l, r = 0, len(numbers) - 1
    if r < 0:
        return None
    if l == r:
        return numbers[0]
    while l < r - 1:
        m = (l + r) // 2
        x = numbers[m]
        if x == xs:
            return x
        elif x < xs:
            l = m
        else:
            r = m
    dist1 = xs - numbers[l]
    dist2 = numbers[r] - xs
    return numbers[l] if dist1 < dist2 else numbers[r]

print(find_nearest([1, 2, 5, 7, 10, 16, 20, 34, 35, 36, 100], 26))
```
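The same binary search can lean on the standard library; a small sketch using Python's `bisect` module (the function name `nearest` is illustrative):

```python
import bisect

def nearest(arr, value):
    # index of the first element >= value
    i = bisect.bisect_left(arr, value)
    if i == 0:
        return arr[0]
    if i == len(arr):
        return arr[-1]
    # the closest element is one of the two neighbors of the insertion point
    before, after = arr[i - 1], arr[i]
    return after if after - value < value - before else before
```

Because `bisect_left` lands on the first occurrence of a run of duplicates, this variant also handles the duplicate case raised in the answers above.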

### A candidate for a software developer position at Google was asked...

Dec 18, 2010
How would you design a distributed algorithm to check whether a huge string of parentheses is balanced?

7 answers

- Divide and conquer.

- Calculate in parallel the count of occurrences of left parens, and ditto for right parens; then consolidate. The counts should be equal.

- Consolidating isn't enough: e.g. chunk1 = "(()" and chunk2 = "))(". Summing up, we get the same number of right and left parentheses, while the original expression is not balanced. Each parallel process should report its sequence of left and right frequencies, in order of appearance.

- Each worker node should eliminate all balanced subsequences from its input and return the residue: some number of leading ')' followed by some number of trailing '('. The main node splits the huge string into numbered blocks and passes them to the workers. Two options follow: since the workers shrink the blocks, we can concatenate the residues to fit the maximum block size (re-split) and pass them to the workers again; alternatively, an unlucky split may leave blocks that don't shrink at all. For example, n blocks of the form `)))))...(((((` cannot be reduced independently, but re-splitting them as `(((((...)))))` leaves n − 1 reducible blocks plus a `)))...` at the beginning and a `(((...` at the end.

- Just give each node a piece of the string, it doesn't matter how big, and do the following: turn every '(' into +1 and every ')' into −1, so e.g. `(()` becomes +1 +1 −1. Now generate two numbers, `sum` and `min`: `min` is the minimal running prefix sum going left to right, and `sum` is the total. For example: `(()` gives (sum = +1, min = +1); `))()`  gives (sum = 0, min = −2). Have all nodes send their (sum, min) pairs to a master node in string order (if the order isn't guaranteed, add an id and sort on the master). The master's role in Python:

```python
balance = 0
for sum_, min_ in sum_min_pairs:
    if balance + min_ < 0:
        return False
    balance += sum_
# edge case
return balance == 0
```

  If that's hard to follow: for each block, `min` tells you the minimal number of '(' required in the WHOLE string before this block, so if `balance` (the number of '(' still open) plus `min` drops below 0, the expression is invalid. Otherwise, `sum` is how this block changes the balance, so just add it and move on. ;)

- Oh, and you don't have to send everything to the master node right away; you can reduce in layers, though the algorithm needs a small change for that.
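The (sum, min) reduction above can be simulated end to end in a few lines of Python; `chunk_summary` plays the worker and `balanced` plays the master (both names are illustrative, and `min` here is clamped to 0, which is equivalent because the running balance never goes negative):

```python
def chunk_summary(chunk):
    # worker: map '(' -> +1, ')' -> -1; report (total sum, minimum prefix sum)
    total, low = 0, 0
    for c in chunk:
        total += 1 if c == "(" else -1
        low = min(low, total)
    return total, low

def balanced(s, chunk_size):
    # split into consecutive blocks, summarize each, then fold on the "master"
    summaries = [chunk_summary(s[i:i + chunk_size])
                 for i in range(0, len(s), chunk_size)]
    balance = 0
    for total, low in summaries:
        if balance + low < 0:   # not enough open parens before this block
            return False
        balance += total
    return balance == 0
```

Because each summary is a constant-size pair, the summaries themselves can be combined pairwise, which is what the layered-reduction remark above exploits.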

### A candidate for a software developer position at Google was asked...

Dec 5, 2012
The question I had the most trouble with (the second one was probably even harder): find the ten most-visited URLs in a large network of computers, where every single machine stores log files of the URLs visited. (For example: take many large URL-to-visit-count maps, compute the combined counts (the sum of visits across all the distributed maps), and from that obtain the ten most-visited URLs of the combined map.) The result list must be exact, and the maps are too large to transmit over the network (in particular, it is not allowed to send them all to a central server or to work with MapReduce directly.)

8 answers

- Brute force: O(m*n), where m is the number of unique URLs and n the number of servers. Take each URL from a server and compute its total hit count by querying every server (O(1) per server, therefore O(n) for n servers). Do this for all m unique URLs and maintain the top 10 in a linked list. Is there a better way?

- An idea I got in the meantime: number the servers 1..n, hash each URL to [1..n], and have each server send each URL to the server its hash points to. Still O(m*n) if every server has every URL, but only O(m) if every URL is on one server.

- Probably the key to this task is the following: we don't need all the maps on the central server. I mean, we could take only the top 20–50 elements from each server (whether 20 or 50 is a question of probability and tuning) and order all of those. More math could be added to this algorithm just to control its correctness, but I am not sure that is really necessary...

- It would be useful to know something about the general format of the URLs. Anyway, assuming each computer hosts a small number of web services, its log files will contain many similar prefixes, so a radix trie could store all that information in a much more compact form. In that form, the information could be sent to a central server, where it can be merged into a running total trie.

- The maps are too large to send over the network in their entirety, but I assume it is okay for each server to send its top 10 most-visited sites. So each server computes its local top 10 and sends it to a central server, which then computes the global top 10. In MapReduce terms, each mapper computes its top-10 list and the reducer computes the global top 10 from the mappers' output.

- @foobarbaz: That was what I said, but it was wrong. There must not be a central server! There are far too many other servers for that.

- @foobarbaz: There is also another problem: the top-ten list might contain URLs that are not in the top 10 of any individual server. A URL visited only once by each and every server in the large network might still end up in the global top 10. At least that's one possibility we should raise with the interviewer.
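The hash-partitioning answer above can be sketched as a toy in-process simulation. Each "server" aggregates its own log and routes every (url, count) pair to the server that owns `hash(url)`; since each URL then lives on exactly one owner, the owner's total is exact, and the global top k is the top k over the union of the owners' local top-k lists. The function name `top_k` and the list-of-logs input shape are assumptions for illustration:

```python
import heapq
from collections import Counter

def top_k(server_logs, k=10):
    n = len(server_logs)
    # phase 1: each server aggregates locally, then routes each
    # (url, count) pair to the server that owns hash(url)
    owned = [Counter() for _ in range(n)]
    for log in server_logs:
        for url, cnt in Counter(log).items():
            owned[hash(url) % n][url] += cnt
    # phase 2: every url lives on exactly one owner, so each owner's
    # totals are exact; merging the owners' local top-k lists is safe
    candidates = []
    for counts in owned:
        candidates.extend(heapq.nlargest(k, counts.items(), key=lambda kv: kv[1]))
    return [url for url, _ in heapq.nlargest(k, candidates, key=lambda kv: kv[1])]
```

No single node ever holds more than its hash partition, which is what makes this compatible with the "no central server" constraint, at the cost of O(m) network traffic.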

### A candidate for a software developer position at Google was asked...

Nov 19, 2010
1. Given a sorted array A[1..n] of n integers and an integer t: find all pairs (x, y) of elements in A such that x + y is less than t. 2. Can it be solved better when looking for pairs (x, y) with x + y = t?

7 answers

- My solutions: 1. You can do it in O(n log n) time and O(1) space. 2. Yes, the cost can be reduced to O(n) time using O(n) additional space.

- Suppose t is so large that all pairs x + y are smaller than t. If we are to find ALL pairs, then the size of the solution is O(n^2), and it takes O(n^2) time just to generate the output:

```c
for (i = 0; i < n - 1 && A[i] + A[i+1] < t; i++)
    for (j = i + 1; j < n && A[i] + A[j] < t; j++)
        printf("%d and %d sum to less than %d\n", A[i], A[j], t);
```

  For the second question (x + y = t) you can put all elements of A into a hash table and then look up t − x in O(n) time and space. Be careful to skip the case x + x = t unless x actually does appear twice in A.

- Yeah, good point! Probably the question was to return the number of such pairs. Thanks for making me notice it.

- The generalized version of this problem is worth studying: the subset sum problem, a well-known NP-complete decision problem: http://en.wikipedia.org/wiki/Subset_sum_problem

- For (2), build a hash in one pass over the array, keyed by A[i] with value t − A[i]; in a second pass, look up each A[i] among the stored complements. Hash lookups are constant time, so the overall cost is 2n, i.e. O(n).

- For (2) there is no need for hashing, because the array is already sorted. A simple O(n) solution uses two pointers, one at the beginning and one at the end of the array; at each step one of them moves:

```c
/* Array A[0..n-1], target value t */
int i = 0, j = n - 1;
while (i < j) {
    int x = A[i] + A[j];
    if (x == t) {
        printf("(%d, %d)\n", A[i], A[j]);
        i++; j--;
    } else if (x > t) {
        j--;   /* x is too large, so decrease the largest value (A[j]) */
    } else {
        i++;   /* x is too small, so increase the smallest value (A[i]) */
    }
}
```

- For (1), an O(n log n) solution exists if we only count the pairs instead of printing them: a. binary-search for the largest element in the array that is less than t, say at index k; b. for each element arr[i] in the subarray [0..k], binary-search for the largest element less than t − arr[i]; if it is found within the subarray's boundaries, every element to its left can be paired with arr[i].
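In fact the counting variant of part 1 also falls out of the two-pointer idea in O(n): whenever A[i] + A[j] < t, all of A[i+1..j] pair with A[i]. A short Python sketch of both parts (function names are illustrative):

```python
def count_pairs_below(a, t):
    # a is sorted; count pairs (i < j) with a[i] + a[j] < t in O(n)
    i, j, count = 0, len(a) - 1, 0
    while i < j:
        if a[i] + a[j] < t:
            count += j - i   # a[i] pairs with every element in a[i+1..j]
            i += 1
        else:
            j -= 1
    return count

def has_pair_sum(a, t):
    # part 2: is there a pair with x + y == t?  Two pointers, O(n)
    i, j = 0, len(a) - 1
    while i < j:
        s = a[i] + a[j]
        if s == t:
            return True
        if s < t:
            i += 1
        else:
            j -= 1
    return False
```

This sidesteps both the O(n^2) output-size issue (a count is O(1) to report) and the hash table for part 2.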

### A candidate for a software developer position at Google was asked...

July 8, 2010
What is the fastest way to sort 1 million integers when they are all in the range [1,100]?

6 answers

- Bucket sort (used only when you know the range of your input, and the range is reasonably sized).

- Counting sort (also known as math sort).

- Use a counting array:

```java
void sort(int[] ints) {
    // counts for the values 1..100
    int[] buff = new int[101];
    // scan the input ints once, counting them -- O(N)
    for (int i = 0; i < ints.length; i++)
        buff[ints[i]]++;
    // output the sorted ints -- O(N)
    int i = 0;
    for (int value = 1; value < buff.length; value++)
        for (int j = 0; j < buff[value]; j++)
            ints[i++] = value;
}
```

- None of the above. You should use radix sorting, with radix r = 100.

- Is this some sort of trick question? If not, set up an array indexed by value, iterate over the input placing each number into its respective element (rejecting anything outside [1,100]), then collect the non-empty entries into a gapless `ArrayDeque` and return it. Complexity is O(n). Am I missing something here?

- This can be done in linear time with constant extra space, which is O(100). The idea is to create an array of length 100 and store in each slot how many times the respective integer appears in the input; then iterate over this 100-length array and, whenever an element's count is larger than zero, write its value into the next available index of the original input array.

### A candidate for a software developer position at Google was asked...

Feb 9, 2015
Given a 2D array describing a map (0 = water, 1 = land), write an algorithm that counts the number of islands (two pieces of land touch if they are adjacent vertically or horizontally, not diagonally).

6 answers

- My attempt -- time complexity is O(mn):

```java
public class Solution {
    public int countIslands(int[][] nums) {
        int count = 0;
        int m = nums.length;
        if (m == 0) return 0;
        int n = nums[0].length;
        if (n == 0) return 0;
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                if (nums[i][j] == 1 && check(nums, i, j))
                    count++;
        return count;
    }

    // true iff nums[i-1][j], nums[i][j-1], nums[i+1][j], nums[i][j+1] are all water
    public boolean check(int[][] nums, int i, int j) {
        int m = nums.length;
        int n = nums[0].length;
        boolean result = true;
        result = result && (i == 0 || nums[i-1][j] == 0);
        result = result && (i == m-1 || nums[i+1][j] == 0);
        result = result && (j == 0 || nums[i][j-1] == 0);
        result = result && (j == n-1 || nums[i][j+1] == 0);
        return result;
    }
}
```

  Any better ideas?

- I think the problem with your algorithm is that it only counts 1x1 islands; in that case I don't see a problem and it does it very well. However, I think the question is about islands that may be much bigger. I would probably do it this way: start by creating a mirror array of all 0s, then explore the input array; every time I reach water, I mark that place as visited in the second array. As soon as I find land, I call an extra function to traverse the island; after it completes, I count that island. That function is easy: if the current place is unmarked land, it calls itself recursively in each of the 4 directions and marks the place as visited; otherwise it only marks the place as visited. What do you think?

- You can use a disjoint-set data structure (union-find).

- If you can change the matrix object: each time we find a 1 (a root), we increase the count and do a DFS/BFS that changes every 1 connected to the root into a 0. At the end, count holds the number of islands.

- I would recommend the following approach: 1) Visualize some examples of the problem in a spreadsheet; you will see that some questions arise. 2) Ask questions to find out more about the problem, for example: "Are there any constraints on the size or shape of the 2D map?", "Do I also need to detect islands with more than one field?", "Do I need to detect islands within islands?", "Are islands directly at the border of the map really islands, or peninsulas?" 3) Write down the constraints. 4) Write unit tests based on the examples from 1), explicitly focusing on special cases. 5) Figure out your solution and write it down in pseudocode. 6) Start coding; you can easily verify it with your unit tests (if you aren't coding in a Google doc :) ). 7) Once finished, refactor your solution. I've come to the following Python solution. It can handle islands with n fields, doughnuts, and islands on islands; as a constraint, fields that border the edge of the map count as peninsulas:

```python
def count_islands(_map):
    islands = 0
    height = len(_map)
    width = len(_map[0])
    for r in range(height):
        for f in range(width):
            if _map[r][f] == 1:  # unvisited land
                if check_island(_map, r, f, width, height):
                    islands += 1
    return islands

# should only be called on a field which isn't water
def check_island(_map, r, f, width, height):
    # if this field is water then something went wrong
    if _map[r][f] == 0:
        raise Exception("FieldIsWaterException")
    # once a land field has been checked, no further need to check it
    _map[r][f] = 2
    is_island = True
    for _r, _f in [(r+1, f), (r, f-1), (r, f+1), (r-1, f)]:
        if 0 <= _r < height and 0 <= _f < width:  # if in range
            if _map[_r][_f] == 1:                 # if unvisited land
                if not check_island(_map, _r, _f, width, height):
                    is_island = False
        else:
            is_island = False  # edge of world -> peninsula
    return is_island

def test():
    island_on_island = [[0, 0, 0, 0, 0, 0, 0],
                        [0, 1, 1, 1, 1, 1, 0],
                        [0, 1, 0, 0, 0, 1, 0],
                        [0, 1, 0, 1, 0, 1, 0],
                        [0, 1, 0, 0, 0, 1, 0],
                        [0, 1, 1, 1, 1, 1, 0],
                        [0, 0, 0, 0, 0, 0, 0]]
    assert(count_islands(island_on_island) == 2)
    # further tests....
    return "*** all tests pass! ***"

print(test())
```

### A candidate for a software developer position at Google was asked...

Sept 29, 2010
How would you sort an array of one million numbers?

6 answers

- The trick here is to ask whether the entire array fits into memory at once. If it does, use quicksort: O(n log n). If it does not, divide the array into chunks that do fit, sort each chunk with quicksort, and store the sorted chunks on disk; then read the sorted lists back, merge them, and save the result to disk: O(n log n) + n = O(n log n).

- For some bonus points, you can discuss how to implement sorting across multiple machines: send chunks of the array to multiple machines; on each machine perform merge sort (or quicksort, if the chunk fits in memory); odd-numbered machines send their sorted chunks to even-numbered machines, which merge their own chunks with the ones sent to them; repeat with machines numbered in multiples of 4, 8, etc., until the final merge is performed. If the number of machines is not a power of two, some machines will have no neighbor to send their chunk to; they idle for the next iteration and take over the number of their missing neighbor.

- One million integers is less than 4 MB and can probably fit into memory. I would further ask about the expected range of the numbers: if it is sufficiently small (e.g. ages, SSNs, etc.), the sort could be done in O(n) time.

- I will use merge sort and map-reduce it (or use any grid-based framework... or code my own stuff). Do you guys think that's the right approach?

- Radix sort: for 32-bit numbers, 32 * O(n).

- While radix sort is indeed 32 * O(n) for 32-bit integer input, the O(n log n) of quicksort, mergesort, and heapsort looks better here, because for n = 1,000,000, log n ≈ 20, which makes the comparison-based sorts roughly 20 * O(n). However, comparisons are assumed to be constant time, and this does not hold when the integers being compared are not randomly chosen: when quicksort, mergesort, etc. converge to comparing closely located integers, those comparisons take more time than comparing random integers. So under several assumptions, radix sort can beat the comparison-based sorts, as long as the numbers are not clustered together.

### A candidate for a software developer position at Google was asked...

Nov 19, 2010
You are given a text file too large to fit in memory, and 3 strings A, B, and C. For each string you have a sorted array listing the positions at which it occurs in the file (i.e. inverted indexes). Find the smallest window in the file that contains all three strings.

5 answers

- I have a solution using O(sum of the lengths of the arrays) time and O(1) space.

- Maintain 3 position cursors, one for each array. At every iteration, advance the cursor of the array containing the minimum value. This is a greedy algorithm that tries to minimize the distance between the 3 elements.

- @Arnab: I'm not sure that works; I think it fails for cases like A = 1, 2, 1000000000; B = 3, 4, 5, 6, 7, 8; C = 9. Although I am curious whether there is a solution better than O(n^2 log n): one way would be to compute all possible pairs between the two shortest lists (n^2), then perform a binary search on the third list for each pair (log n).

- Here is the naive implementation in Java. It is naive because it makes no attempt at efficiency: it simply iterates over all possible combinations with three nested loops over a, b, and c, sorts each triple of positions with a hand-written `sort3` helper, and keeps the combination with the smallest distance between its maximum and minimum. At least it works -- it's better than nothing!

- Here is a solution in O(|A| + |B| + |C|). Call the three arrays Aa, Ab, Ac and keep 3 cursors Ca, Cb, Cc, one for each array. In pseudo-C++:

```cpp
int Ca = 0, Cb = 0, Cc = 0;
int bestWindow = INT_MAX;
int best[3] = {-1, -1, -1};
while (Ca < Aa.size() && Cb < Ab.size() && Cc < Ac.size()) {
    // find max and min values amongst data pointed to by the cursors
    int maxVal = max(Aa[Ca], max(Ab[Cb], Ac[Cc]));
    int minVal = min(Aa[Ca], min(Ab[Cb], Ac[Cc]));
    // if we found a better window, update the best-window variables
    if ((maxVal - minVal) < bestWindow) {
        bestWindow = maxVal - minVal;
        best[0] = Ca; best[1] = Cb; best[2] = Cc;
    }
    // move the cursor pointing to the smallest value
    if (minVal == Aa[Ca])      Ca++;
    else if (minVal == Ab[Cb]) Cb++;
    else                       Cc++;
}
```
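The three-cursor scheme above translates directly to Python; this sketch returns the best window size along with the triple of positions that achieves it (the function name and return shape are illustrative), and it also handles @Arnab's counterexample correctly, since every candidate window is examined before a cursor moves past it:

```python
def smallest_window(a, b, c):
    # three cursors; always advance the one at the smallest position.
    # O(|a| + |b| + |c|), since each step retires one element.
    arrays = [a, b, c]
    idx = [0, 0, 0]
    best = None
    while all(idx[k] < len(arrays[k]) for k in range(3)):
        vals = [arrays[k][idx[k]] for k in range(3)]
        window = max(vals) - min(vals)
        if best is None or window < best[0]:
            best = (window, tuple(vals))
        # move the cursor currently pointing at the smallest value
        idx[vals.index(min(vals))] += 1
    return best
```

Advancing any other cursor could only grow the window, so moving the minimum is safe; with k arrays instead of 3, a heap over the cursors makes each step O(log k).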
1–10 of 435 interview questions
