3

I've been going over previous tech interviews I've had (got another one coming up).

The problem

Anyway, one question I had was...

Given 2 unsorted arrays, how would you find all of the common objects between them?

Say I have arrays A and B. In the worst case, A and B are both of size n.

Initial thought

Initially my thought was to iterate over A and do a linear search through B for each element.

The complexity for this is O(n) * O(n) = O(n^2).
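
To make that concrete, here's a rough Java sketch of the brute-force version (the method name and the use of int arrays are just placeholders for illustration; duplicate handling is glossed over):

import java.util.ArrayList;
import java.util.List;

// Brute force: for every element of A, linearly scan B.
// n iterations, each doing up to n comparisons => O(n^2).
static List<Integer> commonBruteForce(int[] a, int[] b) {
    List<Integer> common = new ArrayList<>();
    for (int x : a) {
        for (int y : b) {
            if (x == y) {
                common.add(x);
                break; // found a match for x, stop scanning B
            }
        }
    }
    return common;
}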

Sorting first

However, I was wondering if it would be better to sort B first.

Using a quick sort (or merge sort) on B is O(n log(n)). This is done once.

Now you can iterate over A, which is O(n), and do a binary search on B, which is O(log(n)), for each element of A.

So the complexity is (sort) O(n log(n)) + (iterate A) O(n) * (search B) O(log(n)) which simplifies down to O(n log(n)).
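
A sketch of that approach, again in Java with illustrative names (Arrays.sort and Arrays.binarySearch stand in for the quick sort and binary search; duplicates are again glossed over):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sort B once (O(n log n)), then binary-search it for each element of A (n * O(log n)).
static List<Integer> commonSortAndSearch(int[] a, int[] b) {
    int[] sortedB = b.clone();
    Arrays.sort(sortedB);                          // O(n log n), done once
    List<Integer> common = new ArrayList<>();
    for (int x : a) {                              // O(n) iterations...
        if (Arrays.binarySearch(sortedB, x) >= 0)  // ...each an O(log n) search
            common.add(x);
    }
    return common;
}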

My question is: am I right with this? I am very new to complexity analysis, so I wanted to check that I'm not doing anything stupid.

Best solution

Is the best solution then to sort one array first before iterating the other? You could sort both arrays and then iterate (as sketched below), but you're not improving on O(n log(n)).
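
For completeness, sorting both arrays and walking them together with two indices would look roughly like this (my own illustrative Java sketch; skipping repeated values within an array is omitted for brevity):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sort both arrays, then walk them together merge-style.
// O(n log n) + O(n log n) + O(n) = O(n log n) overall.
static List<Integer> commonSortBoth(int[] a, int[] b) {
    int[] sa = a.clone(), sb = b.clone();
    Arrays.sort(sa);
    Arrays.sort(sb);
    List<Integer> common = new ArrayList<>();
    int i = 0, j = 0;
    while (i < sa.length && j < sb.length) {
        if (sa[i] < sb[j])      i++;
        else if (sa[i] > sb[j]) j++;
        else {                  // equal => common element
            common.add(sa[i]);
            i++;
            j++;
        }
    }
    return common;
}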

Is there another better way of approaching this?

  • The obvious data structure to use would be two hash tables, since they have an amortized constant look-up cost. Is there any reason why you can't use one? – Kilian Foth Feb 23 '15 at 12:03
  • @KilianFoth How would you do that? Sorry for the stupid question but if you could point me in the right direction. – Fogmeister Feb 23 '15 at 12:14

3 Answers

2

Why not use a hash set? In most implementations, the add() method, which is used to insert items into the set, returns a boolean indicating whether the item was inserted. Insertion returns false if the item was already there. So your code would look something like this:

HashSet<Integer> set = new HashSet<>();
int[] items1 = ...;  // first array
int[] items2 = ...;  // second array

// Insert every element of the first array.
for (int item : items1)
    set.add(item);

// add() returns false if the element is already present,
// i.e. the element appears in both arrays.
for (int item : items2)
    if (!set.add(item))
        System.out.println(item + " is in both arrays");

This would yield a time complexity of 2n, since hash sets have an amortized constant O(1) insertion and look-up time.

If I remember correctly, O(2n) (which reduces to O(n)) would be less than O(n log(n)). This assumes that you do not need to optimize in terms of space.

npinti
  • 1,654
  • Actually, that would be O(n) as constants are ignored as n tends to infinity :D Thanks, makes a lot of sense. – Fogmeister Feb 23 '15 at 13:08
  • @Fogmeister: Yes, I just did not want to induce any confusion as in where did the 2 go. – npinti Feb 23 '15 at 13:13
  • Ah, no worries :D I've been cramming everything about complexity analysis. LOL! – Fogmeister Feb 23 '15 at 13:14
  • 1
    @Fogmeister: Yeah I was trying to create an answer which might not confuse future readers ;). Updated the answer. – npinti Feb 23 '15 at 13:22
  • Hash tables have a worst case access time of O(n). That means that this solution has a worst case run time of O(n^2) – Malt Feb 23 '15 at 13:29
1

You can't possibly do better than O(n), since you have to examine each element at least once to determine whether or not it has a match. My first choice would be to convert the arrays to sets, which is O(n), then take their intersection, which is O(n) on the smaller set. In Python:

set(array1) & set(array2)

If you have to do it in place, which is sometimes a restriction in these sorts of exercises, your solution is pretty good.

Karl Bielefeldt
  • 147,435
  • Excellent thanks. I've found in Obj-C I can use NSMutableSet and the intersectSet method which is what you are doing here in python. Thanks – Fogmeister Feb 23 '15 at 13:11
  • 1
    Set intersection works in O(n^2) – Malt Feb 23 '15 at 13:28
  • @malt, only a very naive implementation. If testing for set membership is O(1), which most implementations are, then you just have to loop through the smaller set and test each member for membership in the other set. – Karl Bielefeldt Feb 23 '15 at 14:37
  • And how would you test for set membership in O(1) in the worst case? – Malt Feb 23 '15 at 14:44
1

I don't think that there's a better answer for worst case complexity in the general case. You could probably only improve specific cases in which the input is somehow limited (say numbers between 1 and N). In that case you could use something like a Radix sort.
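
As an illustration of that limited case (my own sketch, not a full radix sort): if the values are known to be integers in 1..N, you can mark which values occur in one array and then scan the other, which is O(n + N) time and O(N) extra space.

import java.util.ArrayList;
import java.util.List;

// Assumes all values lie in 1..maxValue (the "limited input" case).
static List<Integer> commonBounded(int[] a, int[] b, int maxValue) {
    boolean[] seenInA = new boolean[maxValue + 1];
    for (int x : a)
        seenInA[x] = true;          // O(n) marking pass
    List<Integer> common = new ArrayList<>();
    for (int y : b) {
        if (seenInA[y]) {
            common.add(y);
            seenInA[y] = false;     // don't report the same value twice
        }
    }
    return common;
}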

Karl Bielefeldt's idea of converting the arrays to sets doesn't solve the problem; it only hides it, since intersecting sets (in the general case) is done in O(n^2). For instance, here's Java's generic retainAll() implementation:

public boolean retainAll(Collection<?> paramCollection) {
    boolean modified = false;
    Iterator<?> localIterator = iterator();
    while (localIterator.hasNext()) {
        // Drop every element of this set that the incoming collection doesn't contain.
        if (!paramCollection.contains(localIterator.next())) {
            localIterator.remove();
            modified = true;
        }
    }
    return modified;
}

Note the nested iteration: the method walks over the current set and, for each of its elements, calls contains on the incoming collection. In the generic implementation shown below, contains itself iterates over all of that collection's elements:

public boolean contains(Object paramObject) {
    Iterator<?> localIterator = iterator();
    if (paramObject == null) {
        while (localIterator.hasNext())
            if (localIterator.next() == null)
                return true;
    } else {
        while (localIterator.hasNext())
            if (paramObject.equals(localIterator.next()))
                return true;
    }
    return false;
}

npinti's solution of using a HashSet is equally problematic. A HashSet doesn't guarantee O(1) fetch time in the worst case. In fact, if the hashes of all N elements collide, you're looking at O(n) fetches/inserts. This brings us back to O(n^2) worst case time.
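
To see what that worst case looks like, here's a small toy sketch (my own illustration, not code from any library): a key type whose hashCode() always returns the same value, so every element lands in one bucket and look-ups degrade toward a linear scan of that bucket.

import java.util.HashSet;

// Toy key type with a deliberately degenerate hash: every instance collides.
class BadKey {
    final int value;
    BadKey(int value) { this.value = value; }
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).value == value;
    }
    @Override public int hashCode() { return 42; } // all keys map to the same bucket
}

public class CollisionDemo {
    public static void main(String[] args) {
        HashSet<BadKey> set = new HashSet<>();
        for (int i = 0; i < 10_000; i++)
            set.add(new BadKey(i));   // each add() probes the one crowded bucket
        // contains() must now search that single bucket rather than doing an O(1) probe,
        // so n look-ups cost on the order of n^2 comparisons in total.
        System.out.println(set.contains(new BadKey(9_999)));
    }
}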

Malt
  • 164
  • Ooh, new info. I like this. Can you point me at something about the HashSet not being O(1) for fetching? Thanks :D – Fogmeister Feb 23 '15 at 13:16
  • Nevermind, found this. http://stackoverflow.com/questions/8113246/is-objectforkey-slow-for-big-nsdictionary Which confirms what you said. (ish). Absolute worst case is O(n log(n)) for fetch. That makes it no better than my sort and search. – Fogmeister Feb 23 '15 at 13:20
  • @Fogmeister it's the basics of hash tables. You use the hash of the elements as a key which points to a bin of elements with that hash. Since in the worst case no one can guarantee that the hashes of the different elements are unique, you might end up with all your elements in the same bin. That means that for every fetch you have to go over the whole bin - O(n) – Malt Feb 23 '15 at 13:20
  • 2
    @Fogmeister You can read about it on wikipedia - https://en.wikipedia.org/wiki/Hash_table Look at the upper right side of the page, there's a table with average and worst times. – Malt Feb 23 '15 at 13:21
  • 1
    Why would I care about worst case complexity if the worst case is sufficiently unlikely? – CodesInChaos Feb 23 '15 at 14:20
  • @CodesInChaos in a real world scenario, I'd ensure that my hashes are good enough (which isn't trivial by the way), and use the HashSet solution myself. But in computer science, worst case time complexity is the most common yardstick for comparing algorithms (see https://en.wikipedia.org/wiki/Time_complexity) – Malt Feb 23 '15 at 14:43
  • It's technically correct, but misleading to call a hash lookup O(n), since that is a theoretical worst case that's extremely unlikely in practice. – Karl Bielefeldt Feb 23 '15 at 14:45
  • @KarlBielefeldt 1. I never said that hash lookups are O(n). I said that their worst case is O(n). 2. It's actually not that unlikely in practice either. I've seen cases of HashMaps becoming performance bottlenecks due to collisions since most hash functions people use in practice, well.. suck; and the elements being inserted are strongly correlated. Moreover, in Java, the hash code is only 32 bit long.

    Apache Commons even has a special utility class for creating good hashcodes: https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/builder/HashCodeBuilder.html

    – Malt Feb 23 '15 at 15:21
  • @KarlBielefeldt Also, this was clearly a theoretical question, not a practical one. And in theory, algorithms are usually compared according to their worst case performance. In practice, as I said, I'd probably use HashSets myself. – Malt Feb 23 '15 at 15:23
  • Even in computer science you can use average complexity. – CodesInChaos Feb 23 '15 at 15:39
  • @CodesInChaos Of course. There's also best time complexity and a million other metrics such as memory complexity. But, as I said before, the most common metric for comparing algorithms is worst-case time complexity. – Malt Feb 23 '15 at 15:42