You should design your function to meet specifications.
Practical programming is not the same as pure math. It has different goals, different methods, and different standards than the mathematical field does. Trying to argue with your professor about this means you're arguing specifications, not mathematical principles. As a developer, I applaud you for thinking of this edge case and asking about it. That's what a developer should do: clarify the specifications with the client. And in this case, it's something mathematicians do, too. As you can see in the comments, they'll only talk about divisibility by 0 if you explain what mathematical framework (or, equivalently, what axioms) you're working under. Context is everything in both fields.
Discussions about divisibility are usually motivated by needing to actually divide one thing by another. As a result, it's important to realize that a programming language or specification can define the arithmetic to be whatever is deemed useful. For integer arithmetic, $\frac{0}{0}$ and $0 \bmod 0$ are usually defined to be errors. For floating-point math, the languages I tested considered $\frac{0.0}{0.0}$ and $0.0 \bmod 0.0$ to be either NaN or an error. In this mathematical framework, dividing by zero is meaningless, so it makes sense to say that divisibility by zero is false, even if we have to specifically define divisibility so that's the case.
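To make that concrete, here's a minimal sketch in Python of the kind of function the OP is presumably writing. The name `is_divisible` and the decision to return `False` for a zero divisor are my assumptions about the specification, not anything mandated by the language; the `try` blocks just show what the raw arithmetic does if you don't guard the zero case.

```python
# A minimal sketch; the helper name `is_divisible` and the choice that
# nothing is divisible by zero are assumptions of mine, not part of any
# standard library.

def is_divisible(a: int, b: int) -> bool:
    """True if a is divisible by b, with divisibility by zero defined as False."""
    if b == 0:
        return False          # the explicit spec decision discussed above
    return a % b == 0

print(is_divisible(6, 3))     # True
print(is_divisible(0, 0))     # False under this definition
print(is_divisible(5, 0))     # False

# What the raw arithmetic does if you skip the guard: Python raises,
# while C or Java would give NaN for the floating-point case.
try:
    print(0 % 0)              # integer modulo by zero
except ZeroDivisionError as exc:
    print("0 % 0 raised:", exc)

try:
    print(0.0 / 0.0)          # NaN under IEEE 754 in C or Java; an error here
except ZeroDivisionError as exc:
    print("0.0 / 0.0 raised:", exc)
```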
Even in the normal arithmetic framework over the real numbers, though, dividing by zero is a weird edge case. In many contexts, it's simply considered an undefined quantity, particularly since it's not of interest in most of the fields you'll cover early in your education. (That's what they told you in high school algebra, right?) In these frameworks, we can make everything simpler and more intuitive by defining that nothing is divisible by zero. There may be exotic theories that give $\frac{0}{0}$ or $\frac{a}{0}$ some value, and in those theories it might make sense to use a different definition of divisibility. But you're unlikely to encounter any situations in which these are useful if you're pursuing a career in software development (unless you're building software for mathematicians).
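To pin down the convention I mentioned above, here's the usual definition of divisibility and the small tweak that makes nothing divisible by zero (just one way of writing it down, not the only formulation):

$$a \mid b \iff \exists\, k \in \mathbb{Z} \text{ such that } b = ka$$

Read literally, this makes $0 \mid 0$ true (any $k$ works) and $0 \mid b$ false for $b \neq 0$. The convention that nothing is divisible by zero simply adds the condition $a \neq 0$:

$$a \mid b \iff a \neq 0 \;\wedge\; \exists\, k \in \mathbb{Z} \text{ such that } b = ka$$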
If you want to know why programming languages define their arithmetic this way, they do so because it's a useful definition for the vast majority of numerical calculations in software. Most software never intends to divide anything by zero, and trying to compute further results after encountering it will produce utter nonsense as a final answer. As a result, it's almost always better for your program to give an obviously nonsensical response as early as possible, so you can discover you have a bug in your code, rather than wonder why you're getting $0$ in this one weird case where your assumptions failed you.
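To see why failing early matters, consider a contrived Python sketch (the data and names are entirely made up): an average over a list that is accidentally empty turns into $\frac{0}{0}$, and the error surfaces the bug immediately, whereas a silently "defined" result of $0$ would just look like a very bad team.

```python
# A contrived example: averaging scores when one list is empty by mistake.

def average_score(scores: list[float]) -> float:
    # With an empty list this is sum([]) / len([]) == 0 / 0.
    return sum(scores) / len(scores)

scores_by_team = {"red": [7.0, 9.0], "blue": []}   # "blue" is a data bug

for team, scores in scores_by_team.items():
    try:
        print(team, average_score(scores))
    except ZeroDivisionError:
        # Python stops us right here. If 0 / 0 quietly evaluated to 0,
        # "blue" would report an average of 0 and the bug could go unnoticed.
        print(team, "error: no scores recorded")
```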
I do realize this is not really a mathematical answer. But I think it is the answer to the OP's question. If you feel this doesn't belong on this SE, I encourage you to close the question, perhaps with migration to another SE.