Can you explain to me how you get the Big O notation for the runtime of the following snippet of code?
for x = 1 to n
{
y = 1
while y < n
y = y + y
}
Can you walk through the steps for deriving the Big O notation for it? Also, why doesn't it matter how many statements are inside the for loop when calculating the Big O notation? Wouldn't the number of operations still increase as n gets larger if there are more O(1) statements inside the for loop? Any help would be greatly appreciated.
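To make the question concrete, here is a Python translation of the snippet with an operation counter added (count_ops is just a name I made up for this sketch). The inner while loop doubles y each pass, so it runs about log2(n) times per outer iteration:

```python
def count_ops(n):
    """Count inner-loop iterations of the snippet for a given n."""
    ops = 0
    for x in range(1, n + 1):   # outer loop: runs n times
        y = 1
        while y < n:            # inner loop: y doubles, so ~log2(n) passes
            y = y + y
            ops += 1
    return ops

# For n = 8: each inner loop runs 3 times (1 -> 2 -> 4 -> 8),
# and the outer loop runs 8 times, giving 8 * 3 = 24 operations,
# which matches n * log2(n).
print(count_ops(8))   # 24
print(count_ops(16))  # 64
```

Printing count_ops for growing n shows the totals tracking n * log2(n), which is what I'd expect if the answer is O(n log n).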