Could someone give a practical understanding of what it means for an algorithm to perform in (log a / log b) time?
In other words, practically speaking, for an algorithm to perform in n log n time it would need to split a maximum of log n times and perform about n units of work at each level. For log n performance it would split a maximum of log n times and perform at most a constant amount of work at each level, and so on.
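To make that splitting intuition concrete, here is a minimal Python sketch (the names work_nlogn and work_logn are just for illustration) that halves a problem of size n until it reaches size 1 and counts the work done per level:

```python
import math

def work_nlogn(n):
    """Halve the problem about log2(n) times, doing n units of work per level."""
    total, size = 0, n
    while size > 1:
        total += n      # n units of work at this level
        size //= 2      # one split
    return total        # ~ n * log2(n)

def work_logn(n):
    """Halve the problem about log2(n) times, doing constant work per level."""
    total, size = 0, n
    while size > 1:
        total += 1      # constant work at this level
        size //= 2
    return total        # ~ log2(n)

n = 1024
print(work_nlogn(n), n * math.log2(n))  # 10240 vs 10240.0
print(work_logn(n), math.log2(n))       # 10 vs 10.0
```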
What would be the equivalent way of explaining the performance of a log a / log b algorithm?
Thanks
Imagine an algorithm that takes two inputs a and b and executes in time proportional to log a / log b. It would take about one time unit when a and b are similarly sized. When b is much larger than a it will execute faster; when a is much larger than b it will execute slower. The scale is logarithmic, so as the values of a and b get larger the total execution time becomes more stable: simply adding 1 to either will have little effect. However, when a and b are smaller, adding 1 will be much more noticeable. – Kain0_0 Feb 24 '20 at 01:42
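A quick numeric illustration of that behaviour, using Python's math.log (the helper ratio is just for demonstration):

```python
import math

def ratio(a, b):
    return math.log(a) / math.log(b)

print(ratio(1000, 1000))       # similar sizes: 1.0, about one time unit
print(ratio(10, 1_000_000))    # b much larger than a: ~0.17, faster
print(ratio(1_000_000, 10))    # a much larger than b: 6.0, slower
# Adding 1 barely matters for large values...
print(ratio(1_000_001, 10) - ratio(1_000_000, 10))  # ~4e-07
# ...but is quite noticeable for small ones.
print(ratio(3, 2) - ratio(2, 2))                    # ~0.585
```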
log a / log b equals log of a, base b. When we talk about logarithmic complexities we usually mean log base 2, which comes from the binary nature of computer data representations. Now imagine a merge sort algorithm which, instead of bisecting, trisects. Its complexity would be n log n base 3, which is n log n / log 3. – slepic Feb 24 '20 at 06:33
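A rough sketch of that idea in Python (merge_sort_3way is just an illustrative name, not a tuned implementation): a merge sort that trisects recurses to a depth of about log n / log 3 and does O(n) merge work per level, so it is still O(n log n) up to a constant factor:

```python
import heapq

def merge_sort_3way(xs):
    """Merge sort that trisects instead of bisects.

    Recursion depth is about log3(n) = log(n) / log(3), with O(n)
    merge work per level, i.e. O(n log n) overall.
    """
    if len(xs) <= 1:
        return xs
    third = max(1, len(xs) // 3)
    parts = (xs[:third], xs[third:2 * third], xs[2 * third:])
    sorted_parts = [merge_sort_3way(p) for p in parts if p]
    return list(heapq.merge(*sorted_parts))  # merge the sorted thirds

print(merge_sort_3way([5, 2, 9, 1, 7, 3, 8, 4, 6]))
```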
Take your own description of log n, but instead of the word "split" use the word "bisect". Now change the word to "b-sect" and you have a description of log n / log b. – slepic Feb 24 '20 at 07:01
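And a minimal sketch of that "b-sect" counting (bsect_levels is a hypothetical helper): repeatedly dividing n by b until it reaches 1 takes roughly log n / log b steps:

```python
import math

def bsect_levels(n, b):
    """Count how many times n can be cut into b pieces before reaching size 1."""
    levels = 0
    while n > 1:
        n = math.ceil(n / b)
        levels += 1
    return levels

for b in (2, 3, 10):
    print(b, bsect_levels(1_000_000, b), math.log(1_000_000) / math.log(b))
# e.g. b=2 -> 20 levels (~19.93), b=3 -> 13 (~12.58), b=10 -> 6 (~6.0)
```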