173

I'm working on a project at one of the world's top 3 IT consulting firms, and was told by a DBA that company best practices state stored procedures are not a "best practice". This is so contrary to everything I've learned.

Stored procedures give you code reuse, and encapsulation (two pillars of software development), security (you can grant/revoke permissions on an individual stored proc), protect you from SQL injection attacks, and also help with speed (although that DBA said that starting with SQL Server 2008 that even regular SQL queries are compiled if they are run enough times).

We're developing a complex app using Agile software development methodology. Can anyone think of good reasons why they wouldn't want to use stored procs? My guess was that the DBAs didn't want to maintain those stored procs, but there seem to be way too many negatives to justify such a design decision.

user22453
  • 477
  • 6
    What code reuse does it add? What if your client uses a different database? You'd have to trash all of those SPs and start from scratch. They don't protect you from SQL injection. The speed gain is minimal in most cases. – Rig Apr 20 '12 at 15:07
  • What's the name of the firm? – Alexa May 08 '12 at 21:04
  • Highly related, from another point of view: http://dba.stackexchange.com/questions/2450/what-are-the-arguments-against-or-for-putting-application-logic-in-the-database – Florian Margaine Jun 13 '12 at 20:17
  • 48
    Bear in mind that most large I.T. consulting firms have a motive to maximise billable hours while keeping their arse covered. The old-timers with any clout in these firms also tend to be players and bureaucrats rather than techies. I'd take things like that from a consultancy firm with a grain of salt - I've gotten consultancy firms out of the shit on more than one occasion by fixing up their 'best' practice. – ConcernedOfTunbridgeWells Feb 17 '13 at 11:35
  • 2
    @Rig Code reuse is added just as it is for functions in any language - by wrapping code in a reusable container. Certainly stored procedures do in fact protect you from SQL injection, so long as you don't execute a string you've built. To say the speed gain is minimal seems simply uneducated. Most cases won't fall into the same performance category; the benefits show a wide disparity. – That Realtor Programmer Guy Jun 01 '15 at 07:05
  • 1
    @GaretClaborn But it is much more likely that you'll re-architect the application layer than years and years of historical database data, on any non-trivial application anyway. And if you do, you will spend months porting stored-procedure-rich code. There is little benefit to adding one more dependency to your project except in edge-case situations. Those do exist, but the majority of the time it's just adding one more hurdle to project agility and code reuse. – Rig Jun 01 '15 at 14:43
  • 2
    Coming from a background where we used sps almost exclusively I can tell you the benefit from moving away from them and using an ORM like Entity Framework. Far too many times business logic gets encapsulated within the procedure. While you can version procs with some work and or third party tools. It's not as easy as it would be to do so within a framework like TFS or GIT. Your database code that is emitted is agnostic of your RDBMS provider. So you can switch out RDBMS providers at a later date with less of a headache. – ewahner Nov 09 '16 at 19:32
  • PLSQL can't have unit tests, meaning you should never EVER put any code or procedure in PLSQL. Only store data there. Just because you can doesn't mean you should, it's like the "include" function in PHP... it's there, but don't use it. – Chris Vilches Sep 25 '18 at 15:37
  • 1
    That's strange, I've created unit tests for PL/SQL procedures. I guess it's a good thing I didn't know it was impossible. :-) – Craig Tullis Nov 03 '18 at 06:42
  • @ewahner Hi I'm curious about "business logic encapsulated within the proc". We have an SP which consumes a set of filters (thinking devices, OS) and calculates CLV under their constraint. It's about 1,300 lines long, and I'm wondering if it's possible to use ORM to speed up? E.g. to calculate revenue (business logic) the SP will first insert all filters into a temp table (A) and then insert the transaction table into another temp table (B), LEFT JOIN B with A and remove all null records to apply all filters, then uses aggregation to get revenue/fee/etc. from different columns. It's slow. – Nicholas Humphrey Feb 23 '19 at 16:03
  • As expected a lot of DBAs here are up-in-arms about having less control of "their" databases and what DBAs can do in them. From a business logic point of view (since you'll use that phrase a lot here), it makes sense especially if the database is from a 3rd party provider. They "may" allow DBA access internally into the DB, but would frown upon DBAs writing their own SPs in there. In my company, we've shifted all our custom SPs over to a DW, where we pull the data in from the source and utilise it how we wish. After all, we own the data - not the software provider. – Fandango68 Sep 12 '21 at 23:09

21 Answers

307

In my experience working on very large projects, you have to be very clear on where business logic lives. If you allow an environment where individual developers can put business logic in the business object layer or in a stored procedure as they see fit, a large application becomes VERY difficult to understand and maintain.

Stored procedures are great for speeding up certain DB operations. My architectural decision is to leave all logic in the business layer of the application and employ stored procedures in a targeted manner to improve performance where benchmarking indicates it is warranted.

Eric J.
  • 631
  • 2
    Well mentioned. I'm re-writing a product for a company at the moment and business logic has crept into all the stored procedures. It's become extremely difficult to see what rules live in the UI, the business tiers, and the stored procedures themselves. Not to forget that stored procedures give you vendor lock-in. – Martin Blore Apr 06 '11 at 22:52
  • If you don't use a proprietary db, vendor lock-in is not really that much of an issue, but it does indeed tie you to a solution. – user21007 Apr 06 '11 at 23:04
  • 41
    I don't see things quite that simply. To me, it's ALL business logic. The database, with or without stored procedures, provides certain services and makes certain guarantees. Ideally it should be impossible for incorrect application code to put the database into an inconsistent state. If stored procedures are needed to maintain that consistency, I use them. – kevin cline Apr 07 '11 at 02:38
  • 16
    @kevin cline: Define "inconsistent state". I agree that DB features such as referential integrity are valuable and greatly reduce the prospect of an application error causing serious damage. However, generally speaking, the definition of "consistent data" depends on correct execution of business rules. – Eric J. Apr 07 '11 at 03:46
  • 20
    add my million to Mayo's million. Distributed business logic takes you off of the highway of good practice, straight into the lane of lunacy – Nico Apr 07 '11 at 09:23
  • 28
    +1 Business logic seeping into the DAL is a great concern when using stored procedures. – System Down Apr 07 '11 at 15:59
  • 4
    There is no silver bullet to keeping business logic out of any layer. What about default values on fields, triggers, and other types of constraints? – JeffO Apr 07 '11 at 17:56
  • 4
    @jeff the database is storage of data; it does not enforce anything. The logic layer enforces all. – Christopher Mahan Apr 07 '11 at 20:05
  • 33
    @ChristopherMahan, I would NEVER want to use a database you design. That is the worst possible practice from a database perspective. Data is often changed directly at the database. It is short-sighted to think someone will use the business layer to update a million records or other things that happen over time. Imports do not typically go through the business layer (yep, I want to process my 21 million record import one record at a time in the business layer). Fraud is much easier when you don't have constraints at the database level. Bad data is almost 100% certain. – HLGEM Jan 06 '12 at 21:05
  • 10
    @HLGEM, when you process your 21 million records, you remember to do a select into, right, so it doesn't populate the transaction log. And try 900 million records (with 600 fields) and see if you can finish that night with all the triggers. At some point, you gotta trust the people who can do direct data manipulation. Also, if everyone is DBO, you have other issues. – Christopher Mahan Jan 06 '12 at 21:44
  • 1
    @JeffO: Drive UI validation from business rules defined in the business layer. I have done that for years. Now you can do that with IDataErrorInfo in the .Net Framework. Using EF Code First, metadata for the DB structure also derives from the business layer. – Eric J. Apr 26 '12 at 03:34
  • 1
    This should not be the selected answer... There are projects where people think "all database calls must be stored procedures, and SQL is strictly verboten". – confiq May 06 '12 at 16:11
  • 1
    This is such a narrow-minded programmer point of view. The first part is all right, the second isn't. http://dba.stackexchange.com/a/2452/9749 – Florian Margaine Feb 17 '13 at 11:09
  • 6
    Silly @ChristopherMahan, if you're really going to insert 21,000,000 records all at once, you're probably better off using your database server's bulk load facility and not using insert statements at all. – Craig Tullis May 05 '15 at 06:24
  • 2
    There ARE NO GUARANTEES that additional apps are going to dutifully use your business logic layer to update the database. That being the case, and it being just as true that sooner or later there WILL be additional apps using your database, you're smart to enable the database to maintain data integrity and consistency. That means foreign keys and check constraints. If it also means using stored procedures, then use stored procedures. SPROCS are powerful tools. I don't really get why so many younger programmers are apparently scared of databases. – Craig Tullis May 05 '15 at 06:28
  • 1
    I agree with @kevincline - the idea of distinct and clear-cut "business logic layer" and "data access layer" are archaic at best and dangerous at worst - software should just be viewed as interconnecting components that are abstracted and decoupled where necessary - you don't need "names" for "layers." Whether your "logic" goes in a sproc or in a codebase depends entirely on your goals and infrastructure. – Ant P Feb 17 '17 at 18:16
  • 3
    @Craig In that case, make the database its own "project" and put an api layer over it which encompasses the project. Having multiple apps connect to the same DB with different lifecycles, different requirements, different business logic is a recipe for disaster IMO – Joe Phillips Oct 31 '18 at 17:35
    @JoePhillips I've been designing and building n-tier software systems for a long time. Of course you have a data layer/API in front of the database and your app accesses the database through that layer. If you're lucky, you can talk other departments into also using that API to access the database, but you have no guarantee of this. The database will also outlive your API--fact. If the database can protect itself, you stand a much better chance of maintaining the integrity of your data over time. – Craig Tullis Nov 01 '18 at 18:00
  • 2
    @Craig I think we just see the DB as being different things. I see two scenarios: (1) the DB is its own app and offers an API to gain access (2) the DB is a piece of a single app and that app has full ownership of the DB. You seem to saying there is scenario (3) where neither app really owns the DB and nobody really has full control of their app. Bad design IMO (but very common I'm certain) – Joe Phillips Nov 02 '18 at 19:12
  • 2
    @JoePhillips I'm saying that decades of real world experience bear out that the app is transitory, but the database lives on. Chances are that the database will live beyond the supported lifetime of whatever programming language or framework you used for your app (especially if you chose Ruby on Rails). Sooner or later, another system, and then another, will need access to the database. Technical debt will accrue, the app will be discarded and a new one will be developed. The data is the point. The C-suite and board don't care much about the app. The app and its API are just a means to an end. – Craig Tullis Nov 03 '18 at 02:09
    All this "keeping the logic in one place because otherwise the app becomes difficult to maintain" is nothing other than a lack of understanding of different architectures: a fixation on multitier, which is a monolithic architecture. Learn about the Actor Model, distributed architectures, etc. If it made the app difficult to understand, the same would apply to any kind of API. Hey dude, having a backend makes it difficult to understand, let's have it all in JavaScript then, lol. – kisai Jun 11 '19 at 16:38
    @EricJ. Enforcing business logic rules is exactly what Kevin Cline refers to; the mechanisms of the DB, like primary keys, foreign keys, triggers, cascading, etc., help you enforce business logic. This idea of consolidating business logic is archaic and monolith-oriented; it makes no sense at all. This multitier/n-tier view is wrong: what you really have is a distributed system, and every actor in the system has to enforce business logic. The frontend should do so to help users; the backend, in different services (SOA or uServices), enforces its single-responsibility side of the business. – kisai Jun 11 '19 at 16:53
  • 1
    @kisai Business logic only in the DB creates a dependence on that DB. I think we agree that's a non-starter. I've seen companies fail because they did this.

    The UI (GUI, website, API endpoint) can't be trusted, so while it's an end-user convenience to implement some rules in the UI (e.g. input validation), that can't be the only place they're implemented. I think we both agree on that.

    At a minimum, then, logic must be in the middle tier. Replicating some rules in the UI or DB can be beneficial but they must always be enforced in the middle.

    Good distributed systems are still n-Tier.

    – Eric J. Jun 17 '19 at 20:53
    Imagine the disaster of having queries that need to be updated or changed regularly and having them in the code... you'd basically have to release a new version every time you change a query, and users would need to update it. – MattE Jun 28 '22 at 14:04
194

Some Observations

Stored procedures give you code reuse, and encapsulation (two pillars of software development),

Only if you use them correctly in the context in which they are supposed to be used. The same claim can be made about functions (in structured programming) or methods (in object-oriented programming), and yet, we see 1K-line functions and mega-ass objects.

Artifacts don't give you those benefits. The proper usage of those artifacts is what gives those benefits.

security (you can grant/revoke permissions on an individual stored proc),

Yes. This is a good point and one of the main reasons I like stored procedures. They provide a finer-granularity access control than what can be typically achieved with just views and user accounts.

protect you from SQL injection attacks,

That is not specific to SPs, since you can achieve the same level of protection with parameterized SQL statements and input scrubbing. I would use SPs in addition to those, however, as a matter of "security in depth".
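To make the parameterization point concrete, here is an illustrative Python/sqlite sketch (the table and data are made up for the demo); a bound placeholder keeps attacker-controlled input as data, while string concatenation lets it become SQL:

```python
import sqlite3

# Hypothetical in-memory database with one user row, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "' OR '1'='1"

# Unsafe: concatenated input is interpreted as SQL and matches every row.
unsafe_sql = f"SELECT role FROM users WHERE name = '{malicious}'"
leaked = conn.execute(unsafe_sql).fetchall()

# Safe: the placeholder binds the input as a value, never as SQL text.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (malicious,)
).fetchall()  # no user has that literal name, so no rows come back
```

The same placeholder mechanism is what a driver uses when it calls a stored procedure with parameters, which is why the protection is not specific to SPs.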

and also help with speed (although that DBA said that starting with SQL Server 2008 that even regular SQL queries are compiled if they are run enough times).

This is highly database-vendor specific, but in general your DBA is right. SQL statements (either static or parametrized) do get compiled. SPs help when you need to aggregate and compute data in ways that simple SQL statements cannot, where the work is tightly tied to the data and does not warrant a round-trip to the app server.

A good example is querying data into a temporary cursor (or cursors) against which to run further SQL. You can do it programmatically in the app server, or you can save the multiple round-trips by doing it in the db.

This should not be the norm, however. If you have many of those cases, that is a sign of bad database design (or of pulling data from not-so-compatible database schemas across departments.)
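The round-trip argument can be sketched in a few lines. In this Python/sqlite illustration (sqlite and the orders table stand in for a real server and schema), app-side aggregation pulls every row across the wire, while a single DB-side GROUP BY returns only the small result:

```python
import sqlite3

# Hypothetical orders table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("acme", 10.0), ("acme", 15.0), ("globex", 7.5)],
)

# App-side: every row is fetched, then summed in application code.
totals_app = {}
for customer, amount in conn.execute("SELECT customer, amount FROM orders"):
    totals_app[customer] = totals_app.get(customer, 0.0) + amount

# DB-side: one round-trip returning only the aggregated result.
totals_db = dict(
    conn.execute("SELECT customer, SUM(amount) FROM orders GROUP BY customer")
)
```

Both produce the same totals; the difference is where the work and the network traffic happen.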

We're developing a complex app using Agile software development methodology.

Agility has to do with software engineering processes and requirements management, not technologies.

Can anyone think of good reasons why they wouldn't want to use stored procs?

Wrong Question

The question is wrong and equivalent to asking "are there any good reasons not to use GOTO"? I side with Niklaus Wirth more than with Dijkstra on this subject. I can understand where Dijkstra's sentiment came from, but I do not believe it is 100% applicable in all cases. The same goes for stored procs and any technology.

A tool is good when used well for its intended purpose, and when it is the best tool for the particular task. Using it otherwise is not an indication that the tool is wrong, but that the wielder doesn't know what he/she is doing.

The proper question is "what type of stored procedure usage patterns should be avoided." Or, "under what conditions should I (or should not) use stored procedures". Looking for reasons not to use a technology is simply putting the blame on the tool as opposed to placing the engineering responsibility squarely where it belongs - in the engineer.

In other words, it is a cop-out or a statement of ignorance.

My guess was that the DBAs didn't want to maintain those stored procs, but there seem to be way too many negatives to justify such a design decision.

What they are doing then is projecting the results of their bad engineering decisions on the tools they used poorly.

What to do in your case?

My experience is, when in Rome, do as the Romans do.

Don't fight it. If the people at your company want to label store procs as a bad practice, let them. Be advised however, that this can be a red flag in their engineering practices.

Typical labeling of things as bad practice is usually done in organizations with tons of incompetent programmers. By black-listing certain things, the organization tries to limit the damage inflicted internally by their own incompetence. I shit you not.

Generalizations are the mother of all screw-ups. Saying that stored procs (or any type of technology) are a bad practice is a generalization. Generalizations are cop-outs for the incompetent. Engineers do not work with blatant generalizations. They do analysis on a case-by-case basis, perform trade-off analysis, and execute engineering decisions and solutions according to the facts at hand, in the context in which they are supposed to solve a problem.

Good engineers do not label things as bad practice in such generalizing ways. They look at the problem, select the tools that are appropriate, and make trade-offs. In other words, they do engineering.

My opinion on how not to use them

  • Don't put complex logic beyond data gathering (and perhaps some transformations) in them. It is ok to put some data massaging logic in them, or to aggregate the result of multiple queries with them. But that's about it. Anything beyond that would qualify as business logic which should reside somewhere else.

  • Don't use them as your sole mechanism of defense against SQL injection. You leave them there in case something bad makes it to them, but there should be a slew of defensive logic in front of them - client-side validation/scrubbing, server-side validation/scrubbing, possibly transformation into types that make sense in your domain model, and finally getting passed to parametrized statements (which could be parametrized SQL statements or parametrized stored procs.)

  • Don't make databases the only place containing your stored procs. Your stored procs should be treated just as you treat your C# or Java source code. That is, source-control the textual definition of your stored procs. People rant that stored procs can't be source controlled - bullcrap, they just don't know what the bloody hell they are talking about.
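Source-controlling proc definitions can be as simple as keeping each one in its own .sql file in the repo and applying them with a small deploy script. A hedged Python sketch (the directory layout, file naming, and deploy_sql_dir helper are hypothetical; sqlite lacks stored procedures, so a view stands in):

```python
import sqlite3
import tempfile
from pathlib import Path

def deploy_sql_dir(conn, directory):
    """Apply every .sql file in a repo directory, in name order.
    Each file holds one definition, versioned like any other source file."""
    for path in sorted(Path(directory).glob("*.sql")):
        conn.executescript(path.read_text())

# Demo: a directory as it might live under source control.
repo_dir = tempfile.mkdtemp()
Path(repo_dir, "001_active_users.sql").write_text(
    "CREATE VIEW active_users AS SELECT 1 AS id;"  # view stands in for a proc
)

conn = sqlite3.connect(":memory:")
deploy_sql_dir(conn, repo_dir)
rows = conn.execute("SELECT id FROM active_users").fetchall()
```

With the definitions in files, diffs, reviews, and rollbacks work exactly as they do for application code.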

My opinion on how/where to use them

  • Your application requires data that needs to be transposed or aggregated from multiple queries or views. You can offload that from the application into the db. Here you have to do a performance analysis, since a) database engines are more efficient than app servers at doing these things, but b) app servers are (sometimes) easier to scale horizontally.

  • Fine-grained access control. You do not want some idiot running Cartesian joins in your db, but you cannot just forbid people from executing arbitrary SQL statements either. A typical solution is to allow arbitrary SQL statements in development and UAT environments, while forbidding them in systest and production environments. Any statement that must make it to systest or production goes into a stored procedure, code-reviewed by both developers and dbas.

Any valid need to run a SQL statement outside a stored proc goes through a different username/account and connection pool (with the usage highly monitored and discouraged.)

  • In systems like Oracle, you can get access to LDAP, or create links to external databases (say, calling a stored proc on a business partner's db via VPN.) It's an easy way to write spaghetti code, but that's true for all programming paradigms, and sometimes you have specific business/environment requirements for which this is the only solution. Stored procs help encapsulate that nastiness in one place, close to the data and without having to traverse to the app server.

Whether you run this on the db as a stored proc or on your app server depends on the trade-off analysis that you, as an engineer, have to make. Both options have to be analyzed and justified with some type of analysis. Going one way or another by simply accusing the other alternative of being "bad practice" is just a lame engineering cop-out.

  • In situations where you simply cannot scale up your app server (i.e. no budget for new hardware or cloud instances) but there is plenty of capacity on the db back-end (this is more typical than many people care to admit), it pays to move business logic to stored procs. Not pretty, and it can lead to anemic domain models... but then again... trade-off analysis, the thing most software hacks suck at.

Whether that becomes a permanent solution or not is specific to the constraints observed at that particular moment.

Hope it helps.

luis.espinal
  • 2,590
  • 19
    This is a really good answer. – yfeldblum Apr 07 '11 at 23:52
  • 8
    Good answer, but was this intended to be ironic? "Generalizations are the mother of all screw ups." – bedwyr Feb 18 '13 at 03:28
  • 2
    Yep and nay. That comment of mine was intended for this particular sentence referred to by the OP in his original question (stored procedures are not a "best practice".) A coarse description of stored procedures as best or bad practice is a generalization. Ignoring the context in which they can be good OR bad can (and will often) lead to screw-ups when architecting or designing solutions ;) – luis.espinal Feb 19 '13 at 16:24
  • 8
    +1 for "Typical labeling of things as bad practice is usually done in organizations with tons of incompetent programmers." - been there, lived through that, including being told to my face by a dev manager that he thought I had a great solution for one tricky problem, but if he was seen to allow me to implement it then it would open the floodgates for the muppets. – Julia Hayward Sep 04 '14 at 15:20
    A little long-winded, though a well thought out answer. I hope the "when in Rome, do as the Romans do" is taken to heart. – Robert Baron Feb 19 '17 at 05:13
  • Answer would be improved without the snark. Or if you knew what a 'best practice' is. A best practice is a method or technique that has been generally accepted as superior to any alternatives because it produces results that are superior to those achieved by other means or because it has become a standard way of doing things This entire answer describes how Stored Procedures are not a best practice. You have to think about their pros and cons and use them accordingly. But then you take a dump on other people for saying the same thing. – Shane Apr 16 '18 at 21:17
  • 1
    @Shane You are right. However I believe what this answer is trying to convey is the tendency of some groups of engineers to excuse their lack of knowledge or analysis by calling on the bad practice card. The answer could see some improvement for the more inexperienced of us, though. – Cesar Hernandez Sep 19 '18 at 19:32
59

The rationale is that relying on a stored procedure layer limits portability and ties you to a certain DB. Added maintenance costs are also cited as a reason. I also wanted to comment on this point you made:

(stored procedures) protect you from SQL injection attacks

It's actually the parametrized querying that protects you, which you can easily do in plain-text SQL querying.

user
  • 2,200
System Down
  • 4,753
  • 21
    And if your stored proc is using any type of dynamic sql along with a string parameter, you're right back where you started. – JeffO Apr 07 '11 at 17:52
  • 4
    The difference is that access permission can be set for stored procedures on a per-procedure basis; for parameterised SQL queries you have to rely on programmers' sanity not to do + "blablabla", because you have to allow plain SQL, and that's where control ends. – Coder Apr 20 '12 at 03:58
  • 23
    I've never understood the "ties you to a certain DB" argument. How often do you take your program and migrate it to an entirely different database? – Mason Wheeler Jun 14 '12 at 18:07
  • 12
    @MasonWheeler - +1 every time. In any sufficiently large project, your app ends up being written against the foibles of a given DB product. Converting to another DB becomes a major job no matter what because the new DB will have different oddities! – Michael Kohne Jun 14 '12 at 18:09
  • 2
    @MichaelKohne, I would say never in my experience. It depends on what segment of the industry you are in. In enterprise systems it is rare to change database vendors. In the COTS world, it happens all the time, or the design must account for multiple vendor possibilities. The web world is somewhat divided, depending on what the web site actually does and how critical the db is to the success of the site. – HLGEM Jun 05 '13 at 17:45
  • 6
    @HLGEM - but in the COTS world, multiple DBs are EXPECTED at the outset (in fact, you choose the compatible DBs). It's not that you port, it's that you support different back-ends, which is a completely different beast than doing a port. – Michael Kohne Jun 05 '13 at 18:34
52

Some of the reasons why I agree stored procs aren't a best practice:

  • Business and application logic should be in the code not in the database. Putting logic in the DB is mixing up concerns.
  • You can't test stored procs as seamlessly as code in your conventional unit test projects with the rest of the application logic.
  • I don't find stored procs as being conducive to test first programming when I am writing code.
  • Stored procs aren't as easy to debug as application code when you are debugging your program in your IDE.
  • Version control / source control of SPs vs. normal code
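Whether proc logic can be unit tested at all is argued in the comments below; the usual workaround is an integration test against a disposable database. A hedged Python sketch of that pattern (sqlite has no stored procedures, so a view and a made-up sales table stand in for the procedure under test):

```python
import sqlite3
import unittest

class ProcIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Fresh throwaway database per test: no shared state between runs.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
        # A view stands in for the stored procedure under test.
        self.conn.execute(
            "CREATE VIEW region_totals AS "
            "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
        )

    def test_totals(self):
        self.conn.executemany(
            "INSERT INTO sales VALUES (?, ?)", [("east", 5.0), ("east", 7.0)]
        )
        rows = self.conn.execute("SELECT * FROM region_totals").fetchall()
        self.assertEqual(rows, [("east", 12.0)])

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ProcIntegrationTest)
)
```

Because a live connection is required, this is an integration test rather than a unit test, which is precisely the distinction the bullet points above draw.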
Gilles
  • 2,201
  • 10
    You can just as easy do test-first programming on stored procedures. –  Apr 07 '11 at 06:57
  • 5
    Hmmm, well... 1) The usage of db stored procedures does not necessarily imply that business logic is being put in them. 2) Stored procs are some of the easiest things to unit test. 3) Stored procs are not necessarily conducive to test-first practices, true, but not everything that is computable can be test-first'ed. 4) Debugging shouldn't be an issue, since stored procs should contain nothing more than easy-to-verify SQL statements and cursors. Also, debugging should take place by first testing and debugging the SQL statements in code, and then moving them into stored procs... just IMO btw. – luis.espinal Apr 07 '11 at 16:30
  • 8
    You're obviously not a DB dev. Source control, IDEs - its damn easy to debug a SP if you're using TOAD or a similar IDE, same with versioning. – gbjbaanb Jul 21 '11 at 11:02
  • I recommend utplsql for unit testing PL/SQL packages. – Scott A Jul 22 '11 at 01:41
  • 8
    On unit testing stored procs: idk about other unit test frameworks, but at least with MS Test (VisualStudio.TestTools.UnitTesting), running any of the Assert methods on a stored proc requires a DB connection, which by definition makes it more of an integration test than a unit test. And a stored proc may reference state about the database at a global, database level. These may not be fake-able or have interfaces. – T. Webster Jan 07 '12 at 07:50
  • 4
    +1 In addition, stored procedure languages (pl/sql,t-sql,plpgsql, etc) are very clunky and verbose. It's much easier for me to use a scripting language to make a database connection and handle the business logic outside of the database. –  Jun 13 '12 at 19:31
  • I know its an old question, but wouldn't taking a service based approach solve these issues while keeping the benefits of stored procedures? – Jonathan DS Jan 15 '20 at 18:21
    Complex business logic must always be in the database, because business data must be processed using SQL, not any other language. The mistake application developers always make is to write complex logic as a cursor (row-by-row processing), which makes the application less scalable. – Sabyasachi Mitra Jun 19 '20 at 15:54