A real, square matrix $B$ is positive definite iff $v^TBv> 0$ for all $v\neq 0$. But
$$v^TBv = \tfrac{1}{2}(v^TBv+v^TB^Tv) = \tfrac{1}{2}v^T(B+B^T)v.$$
It follows that $B$ is positive definite iff its symmetric part $B+B^T$ is positive definite. Therefore, your model becomes
$$\begin{array}{ll}
\text{minimize} & \|A-B\|_F \\
\text{subject to} & B+B^T \succ 0
\end{array}$$
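The identity above is easy to check numerically. A small illustrative sketch in NumPy (the random matrix and vector are just examples):

```python
import numpy as np

# Check that v'Bv = (1/2) v'(B+B')v for a generic nonsymmetric B,
# which is why positive definiteness of B only depends on its
# symmetric part B+B'.
rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))   # generic nonsymmetric matrix
v = rng.standard_normal(n)

lhs = v @ B @ v
rhs = 0.5 * v @ (B + B.T) @ v
assert np.isclose(lhs, rhs)
```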
This remains a convex optimization problem. For example, in CVX the model is
n = size(A,1);
cvx_begin sdp
    variable B(n,n)
    minimize( norm( A - B, 'fro' ) )
    subject to
        B + B' >= 0
cvx_end
(Disclaimer: I am the author of CVX. You do not need to use it to solve this problem, however. Any SDP solver can handle this problem.)
Note that the CVX model relaxes the condition, requiring $B+B^T$ only to be positive *semi*definite. That relaxation will be necessary with any numerical solver you are likely to employ here. Of course, an interior-point method would give you a sequence of strictly positive definite iterates converging to an optimum, but that optimum may itself lie on the boundary of the cone, i.e., be merely positive semidefinite.
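For this particular problem you can even bypass the SDP solver: $\|A-B\|_F^2$ splits orthogonally over the symmetric and skew-symmetric parts of $A-B$, and the constraint involves only the symmetric part of $B$, so the optimum keeps the skew part of $A$ and projects the symmetric part of $A$ onto the PSD cone by clipping negative eigenvalues. A minimal NumPy sketch of that closed-form projection (the function name `nearest_semidefinite` is mine):

```python
import numpy as np

def nearest_semidefinite(A):
    """Closest B to A in Frobenius norm with B + B' PSD.

    The Frobenius norm separates over symmetric and skew parts,
    and the constraint touches only the symmetric part, so:
    keep the skew part of A, and project the symmetric part
    onto the PSD cone by zeroing out negative eigenvalues.
    """
    S = 0.5 * (A + A.T)                    # symmetric part
    K = 0.5 * (A - A.T)                    # skew-symmetric part
    w, V = np.linalg.eigh(S)               # S = V diag(w) V'
    S_psd = (V * np.clip(w, 0.0, None)) @ V.T
    return S_psd + K

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = nearest_semidefinite(A)
# B + B' is PSD up to roundoff:
assert np.linalg.eigvalsh(B + B.T).min() >= -1e-10
```

If $A$ already satisfies the constraint, the projection returns $A$ unchanged, as one would expect.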