Should you expect unexpected values from external APIs?












14















Let's say you are writing a function that takes input from an external API, MyAPI.

MyAPI's contract states that it will return a string or a number.

Is it recommended to guard against values like null, undefined, or booleans, even though they are not part of MyAPI's contract? In particular, since you have no control over that API, you cannot enforce the guarantee through something like static type analysis, so is it better to be safe than sorry?



I'm thinking in relation to the Robustness Principle.
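For concreteness, a runtime guard for this situation could look like the following sketch. MyAPI is hypothetical, so the function name and error message are made up for illustration:

```typescript
// Hedged sketch: narrow an untyped API response to the contract "string | number".
// MyAPI is hypothetical; parseMyApiValue and its error text are illustrative.
function parseMyApiValue(value: unknown): string | number {
  if (typeof value === "string" || typeof value === "number") {
    return value; // within the stated contract
  }
  // null, undefined, booleans, objects, etc. all violate the contract
  throw new TypeError(`MyAPI contract violation: received ${typeof value}`);
}
```

Whether this kind of guard is worthwhile is exactly what the answers below debate.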




























  • 4





    What are the impacts of not handling those unexpected values if they are returned? Can you live with these impacts? Is it worth the complexity to handle those unexpected values to prevent having to deal with the impacts?

    – Vincent Savard
    10 hours ago













  • @VincentSavard I know I won't get an absolute answer without this data, but I'm looking for a general answer.

    – Adam Thompson
    9 hours ago






  • 12





    If you're expecting them, then by definition they're not unexpected.

    – Mason Wheeler
    9 hours ago











  • Possible duplicate of Differences between Design by Contract and Defensive Programming

    – gnat
    8 hours ago






  • 1





    What does "external API" mean? Is it still under your Control?

    – Deduplicator
    6 hours ago
















Tags: design, api, api-design, web-services, functions






asked 10 hours ago









Adam Thompson















6 Answers
























26














You should never trust the inputs to your software, regardless of source. Validating the types is important, but so are the ranges of the input and the business logic.

Failing to do so will at best leave you with garbage data that you have to clean up later, but at worst you'll leave an opportunity for malicious exploits if that upstream service gets compromised in some fashion (q.v. the Target hack). The range of problems in between includes getting your application into an unrecoverable state.
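The layered validation this answer describes (type, then range, then business rule) might be sketched as follows; the "quantity in [1, 100]" rule is an invented example, not something from the question:

```typescript
// Layered validation sketch: type check, then range, then a business rule.
// The field ("quantity") and its allowed range [1, 100] are assumptions
// made up for illustration.
function validateQuantity(value: unknown): number {
  if (typeof value !== "number" || !Number.isFinite(value)) {
    throw new TypeError("quantity must be a finite number"); // type
  }
  if (!Number.isInteger(value)) {
    throw new RangeError("quantity must be an integer"); // range
  }
  if (value < 1 || value > 100) {
    throw new RangeError("quantity must be between 1 and 100"); // business rule
  }
  return value;
}
```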

























  • 5





    What does "q.v." stand for?

    – JonH
    7 hours ago






  • 3





    @JonH basically "see also"... the Target hack is an example that he is referencing en.oxforddictionaries.com/definition/q.v.

    – andrewtweber
    6 hours ago











  • This answer, as it stands, just doesn't make sense. It's infeasible to anticipate each and every way a third-party library might misbehave. If a library function's documentation explicitly assures that the result will always have some properties, then you should be able to rely on the designers having ensured that the property actually holds. It's their responsibility to have a test suite that checks this kind of thing, and to submit a bug fix in case a situation is encountered where it doesn't. Checking these properties in your own code violates the DRY principle.

    – leftaroundabout
    4 hours ago













  • ...That's not to say there aren't in practice often reasons to mistrust certain third-party functions, but distrusting everything doesn't get you anywhere. Following that logic, you must write everything yourself including the operating system. I think we can agree that this would not give a better end product... — What I'd argue is the best approach to this kind of things is to choose tooling that gets as close as possible for library providers to ensure contracts: strong, static type systems, automated property-based unit testing etc..

    – leftaroundabout
    4 hours ago













  • @leftaroundabout no, but you should be able to predict all valid things your application can accept and reject the rest.

    – Paul
    2 hours ago



















12














Yes, of course. But what makes you think the answer could be different?

You surely don't want to let your program behave in some unpredictable manner when the API does not return what the contract says, do you? So at the very least you have to deal with such behaviour somehow. A minimal form of error handling is always worth the (very minimal!) effort, and there is absolutely no excuse for not implementing something like it.

However, how much effort you should invest in dealing with such a case is heavily case-dependent and can only be answered in the context of your system. Often, a short log entry and letting the application end gracefully is enough. Sometimes you will be better off implementing detailed exception handling, dealing with different forms of "wrong" return values, and maybe a fallback strategy.

But it makes a hell of a difference whether you are writing some in-house spreadsheet-formatting application, used by fewer than 10 people, where the financial impact of an application crash is quite low, or whether you are creating an autonomous car-driving system, where an application crash may cost lives.

So there is no shortcut around reflecting on what you are doing; using your common sense is always mandatory.
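The "short log entry plus graceful fallback" end of the effort spectrum might look like this minimal sketch; `fetchValue` and the fallback value are hypothetical stand-ins for the real API call:

```typescript
// Minimal "log and degrade gracefully" sketch. fetchValue and the fallback
// are hypothetical; how much more effort this deserves is case-dependent.
function safeGet(
  fetchValue: () => unknown,
  fallback: string | number,
): string | number {
  const value = fetchValue();
  if (typeof value === "string" || typeof value === "number") {
    return value; // contract held
  }
  console.error("MyAPI returned an out-of-contract value:", value);
  return fallback; // degrade gracefully instead of crashing
}
```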
































  • What to do is another decision. You may have a failover solution. Anything asynchronous could be retried before creating an exception log (or dead letter). An active alert to the vendor or provider may be an option if the issue persists.

    – mckenzm
    2 hours ago





















8














The Robustness Principle--specifically, the "be liberal in what you accept" half of it--is a very bad idea in software. It was originally developed in the context of hardware, where physical constraints make engineering tolerances very important, but in software, when someone sends you malformed or otherwise improper input, you have two choices. You can either reject it (preferably with an explanation as to what went wrong), or you can try to figure out what it was supposed to mean.



Never, never, never choose that second option unless you have resources equivalent to Google's Search team to throw at your project, because that's what it takes to come up with a computer program that does anything close to a decent job at that particular problem domain. (And even then, Google's suggestions feel like they're coming straight out of left field about half the time.) If you try to do so, what you'll end up with is a massive headache where your program will frequently try to interpret bad input as X, when what the sender really meant was Y.



This is bad for two reasons. The obvious one is because then you have bad data in your system. The less obvious one is that in many cases, neither you nor the sender will realize that anything went wrong until much later down the road when something blows up in your face, and then suddenly you have a big, expensive mess to fix and no idea what went wrong because the noticeable effect is so far removed from the root cause.



This is why the Fail Fast principle exists; save everyone involved the headache by applying it to your APIs.
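The "reject it, with an explanation" option could be sketched as follows; the YYYY-MM-DD format and the function name are illustrative assumptions, not anything from the answer itself:

```typescript
// Fail-fast sketch: accept exactly one documented input format and reject
// everything else with an explanation, instead of guessing what the sender
// meant. The YYYY-MM-DD format is an assumption made for illustration.
function requireIsoDate(input: string): Date {
  if (!/^\d{4}-\d{2}-\d{2}$/.test(input)) {
    throw new Error(`expected YYYY-MM-DD, got "${input}"`);
  }
  const parsed = new Date(input);
  if (Number.isNaN(parsed.getTime())) {
    throw new Error(`"${input}" is not a real calendar date`);
  }
  return parsed;
}
```

Note that the guard makes no attempt to "repair" other date formats, which is exactly the second option the answer warns against.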

























  • 1





    While I agree with the principle of what you're saying, I think you're mistaken WRT the intent of the Robustness Principle. I've never seen it intended to mean, "accept bad data", only, "don't be excessively fiddly about good data". For example, if the input is a CSV file, the Robustness Principle wouldn't be a valid argument for trying to parse out dates in an unexpected format, but would support an argument that inferring column order from a header row would be a good idea.

    – Morgen
    7 hours ago






  • 3





    @Morgen: The robustness principle was used to suggest that browsers should accept rather sloppy HTML, and led to deployed web sites being much sloppier than they would have been if browsers had demanded proper HTML. A big part of the problem there, though, was the use of a common format for human-generated and machine-generated content, as opposed to the use of separate human-editable and machine-parsable formats along with utilities to convert between them.

    – supercat
    6 hours ago






  • 2





    @supercat: nevertheless - or just hence - HTML and the WWW was extremely successful ;-)

    – Doc Brown
    6 hours ago








  • 3





    @DocBrown: A lot of really horrible things have become standards simply because they were the first approach that happened to be available when someone with a lot of clout needed to adopt something that met certain minimal criteria, and by the time they gained traction it was too late to select something better.

    – supercat
    6 hours ago






  • 2





    @supercat Exactly. JavaScript immediately comes to mind, for example...

    – Mason Wheeler
    6 hours ago



















2














Let's compare the two scenarios and try to come to a conclusion.

Scenario 1
Our application assumes the external API will behave as per the agreement.

Scenario 2
Our application assumes the external API can misbehave, and adds precautions.

In general, there is a chance for any API or piece of software to violate its agreements, whether due to a bug or to unexpected conditions. An API might even have issues in its internal systems that produce unexpected results.

If our program is written assuming the external API will adhere to the agreements, and avoids adding any precautions, who will be the party facing the issues? It will be us, the ones who wrote the integration code.

Take, for example, the null values you mentioned. Say the API agreement states the response will have non-null values; if that is suddenly violated, our program will throw NPEs.

So, I believe it is better to make sure your application has some additional code to address unexpected scenarios.
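The NPE precaution described here amounts to checking the not-null promise at the integration boundary, so a violation fails with a clear message rather than a null dereference deep inside the program. A generic sketch (names are illustrative):

```typescript
// Boundary check for a not-null contract. A violation surfaces immediately
// with a descriptive error instead of a later null-dereference (NPE-style)
// failure. The function and field names are invented for illustration.
function requireNotNull<T>(value: T | null | undefined, field: string): T {
  if (value === null || value === undefined) {
    throw new Error(`MyAPI violated its contract: "${field}" was ${value}`);
  }
  return value; // safely narrowed to T
}
```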














– lkamal




























    2














    In general, code should be constructed to uphold at least the following constraints whenever practical:




    1. When given correct input, produce correct output.


    2. When given valid input (that may or may not be correct), produce valid output (likewise).


    3. When given invalid input, process it without any side-effects beyond those caused by normal input or those which are defined as signalling an error.



    In many situations, programs will essentially pass through various chunks of data without particularly caring about whether they are valid. If such chunks happen to contain invalid data, the program's output would likely contain invalid data as a consequence. Unless a program is specifically designed to validate all data, and guarantee that it will not produce invalid output even when given invalid input, programs that process its output should allow for the possibility of invalid data within it.



    While validating data early on is often desirable, it's not always particularly practical. Among other things, if the validity of one chunk of data depends upon the contents of other chunks, and if the majority of the data fed into some sequence of steps will get filtered out along the way, limiting validation to data which makes it through all stages may yield much better performance than trying to validate everything.



    Further, even if a program is only expected to be given pre-validated data, it's often good to have it uphold the above constraints anyway whenever practical. Repeating full validation at every processing step would often be a major performance drain, but the limited amount of validation needed to uphold the above constraints may be much cheaper.
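A pass-through step that upholds constraints 2 and 3 without validating payload content might look like this sketch; the `Row` shape is invented for illustration:

```typescript
// Sketch of a pass-through step: it does not judge whether payloads are
// valid, but it never crashes or causes side effects on bad input. Invalid
// data in means invalid data out, so downstream consumers must still allow
// for it, per the discussion above. The Row shape is invented.
interface Row {
  id: unknown;
  payload: unknown;
}

function selectIds(rows: Row[]): unknown[] {
  // Each row's id flows through as-is, valid or not.
  return rows.map((row) => row.id);
}
```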





































      0














      You should always validate incoming data, user-entered or otherwise, so you should have a process in place for handling cases where the data retrieved from this external API is invalid.

      Generally speaking, any seam where extra-organizational systems meet should require authentication, authorization (if not defined simply by authentication), and validation.





























        Your Answer








        StackExchange.ready(function() {
        var channelOptions = {
        tags: "".split(" "),
        id: "131"
        };
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function() {
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled) {
        StackExchange.using("snippets", function() {
        createEditor();
        });
        }
        else {
        createEditor();
        }
        });

        function createEditor() {
        StackExchange.prepareEditor({
        heartbeatType: 'answer',
        autoActivateHeartbeat: false,
        convertImagesToLinks: false,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: null,
        bindNavPrevention: true,
        postfix: "",
        imageUploader: {
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        },
        onDemand: false,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        });


        }
        });














        draft saved

        draft discarded


















        StackExchange.ready(
        function () {
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsoftwareengineering.stackexchange.com%2fquestions%2f385497%2fshould-you-expect-unexpected-values-from-external-apis%23new-answer', 'question_page');
        }
        );

        Post as a guest















        Required, but never shown




















        StackExchange.ready(function () {
        $("#show-editor-button input, #show-editor-button button").click(function () {
        var showEditor = function() {
        $("#show-editor-button").hide();
        $("#post-form").removeClass("dno");
        StackExchange.editor.finallyInit();
        };

        var useFancy = $(this).data('confirm-use-fancy');
        if(useFancy == 'True') {
        var popupTitle = $(this).data('confirm-fancy-title');
        var popupBody = $(this).data('confirm-fancy-body');
        var popupAccept = $(this).data('confirm-fancy-accept-button');

        $(this).loadPopup({
        url: '/post/self-answer-popup',
        loaded: function(popup) {
        var pTitle = $(popup).find('h2');
        var pBody = $(popup).find('.popup-body');
        var pSubmit = $(popup).find('.popup-submit');

        pTitle.text(popupTitle);
        pBody.html(popupBody);
        pSubmit.val(popupAccept).click(showEditor);
        }
        })
        } else{
        var confirmText = $(this).data('confirm-text');
        if (confirmText ? confirm(confirmText) : true) {
        showEditor();
        }
        }
        });
        });






        6 Answers
        6






        active

        oldest

        votes








        6 Answers
        6






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        26














        You should never trust the inputs to your software, regardless of source. Not only validating the types is important, but also ranges of input and the business logic as well.



        Failing to do so will at best leave you with garbage data that you have to later clean up, but at worst you'll leave an opportunity for malicious exploits if that upstream service gets compromised in some fashion (q.v. the Target hack). The range of problems in between includes getting your application in an unrecoverable state.






        share|improve this answer



















        • 5





          What is q.v. stand for ?

          – JonH
          7 hours ago






        • 3





          @JonH basically "see also"... the Target hack is an example that he is referencing en.oxforddictionaries.com/definition/q.v.

          – andrewtweber
          6 hours ago











        • This answer is as it stands just doesn't make sense. It's infeasible to anticipate each and every way a third-party library might misbehave. If a library function's documentation explicitly assures that the result will always have some properties, then you should be able to rely on it that the designers ensured this property will actually hold. It's their responsibility to have a test suite that checks this kind of thing, and submit a bug fix in case a situation is encountered where it doesn't. You checking these properties in your own code is violating the DRY principle.

          – leftaroundabout
          4 hours ago













        • ...That's not to say there aren't in practice often reasons to mistrust certain third-party functions, but distrusting everything doesn't get you anywhere. Following that logic, you must write everything yourself including the operating system. I think we can agree that this would not give a better end product... — What I'd argue is the best approach to this kind of things is to choose tooling that gets as close as possible for library providers to ensure contracts: strong, static type systems, automated property-based unit testing etc..

          – leftaroundabout
          4 hours ago













        • @leftaroundabout no, but you should be able to predict all valid things your application can accept and reject the rest.

          – Paul
          2 hours ago
















        26














        You should never trust the inputs to your software, regardless of source. Not only validating the types is important, but also ranges of input and the business logic as well.



        Failing to do so will at best leave you with garbage data that you have to later clean up, but at worst you'll leave an opportunity for malicious exploits if that upstream service gets compromised in some fashion (q.v. the Target hack). The range of problems in between includes getting your application in an unrecoverable state.






        share|improve this answer



















        • 5





          What is q.v. stand for ?

          – JonH
          7 hours ago






        • 3





          @JonH basically "see also"... the Target hack is an example that he is referencing en.oxforddictionaries.com/definition/q.v.

          – andrewtweber
          6 hours ago











        • This answer is as it stands just doesn't make sense. It's infeasible to anticipate each and every way a third-party library might misbehave. If a library function's documentation explicitly assures that the result will always have some properties, then you should be able to rely on it that the designers ensured this property will actually hold. It's their responsibility to have a test suite that checks this kind of thing, and submit a bug fix in case a situation is encountered where it doesn't. You checking these properties in your own code is violating the DRY principle.

          – leftaroundabout
          4 hours ago













        • ...That's not to say there aren't in practice often reasons to mistrust certain third-party functions, but distrusting everything doesn't get you anywhere. Following that logic, you must write everything yourself including the operating system. I think we can agree that this would not give a better end product... — What I'd argue is the best approach to this kind of things is to choose tooling that gets as close as possible for library providers to ensure contracts: strong, static type systems, automated property-based unit testing etc..

          – leftaroundabout
          4 hours ago













        • @leftaroundabout no, but you should be able to predict all valid things your application can accept and reject the rest.

          – Paul
          2 hours ago














        26












        26








        26







        You should never trust the inputs to your software, regardless of source. Not only validating the types is important, but also ranges of input and the business logic as well.



        Failing to do so will at best leave you with garbage data that you have to later clean up, but at worst you'll leave an opportunity for malicious exploits if that upstream service gets compromised in some fashion (q.v. the Target hack). The range of problems in between includes getting your application in an unrecoverable state.






        share|improve this answer













        You should never trust the inputs to your software, regardless of source. Not only validating the types is important, but also ranges of input and the business logic as well.



        Failing to do so will at best leave you with garbage data that you have to later clean up, but at worst you'll leave an opportunity for malicious exploits if that upstream service gets compromised in some fashion (q.v. the Target hack). The range of problems in between includes getting your application in an unrecoverable state.







        share|improve this answer












        share|improve this answer



        share|improve this answer










        answered 10 hours ago









        PaulPaul

        2,4551015




        2,4551015








        • 5





          What is q.v. stand for ?

          – JonH
          7 hours ago






        • 3





          @JonH basically "see also"... the Target hack is an example that he is referencing en.oxforddictionaries.com/definition/q.v.

          – andrewtweber
          6 hours ago











        • This answer is as it stands just doesn't make sense. It's infeasible to anticipate each and every way a third-party library might misbehave. If a library function's documentation explicitly assures that the result will always have some properties, then you should be able to rely on it that the designers ensured this property will actually hold. It's their responsibility to have a test suite that checks this kind of thing, and submit a bug fix in case a situation is encountered where it doesn't. You checking these properties in your own code is violating the DRY principle.

          – leftaroundabout
          4 hours ago













        • ...That's not to say there aren't in practice often reasons to mistrust certain third-party functions, but distrusting everything doesn't get you anywhere. Following that logic, you must write everything yourself including the operating system. I think we can agree that this would not give a better end product... — What I'd argue is the best approach to this kind of things is to choose tooling that gets as close as possible for library providers to ensure contracts: strong, static type systems, automated property-based unit testing etc..

          – leftaroundabout
          4 hours ago













        • @leftaroundabout no, but you should be able to predict all valid things your application can accept and reject the rest.

          – Paul
          2 hours ago














        • 5





          What is q.v. stand for ?

          – JonH
          7 hours ago






        • 3





          @JonH basically "see also"... the Target hack is an example that he is referencing en.oxforddictionaries.com/definition/q.v.

          – andrewtweber
          6 hours ago











        • This answer is as it stands just doesn't make sense. It's infeasible to anticipate each and every way a third-party library might misbehave. If a library function's documentation explicitly assures that the result will always have some properties, then you should be able to rely on it that the designers ensured this property will actually hold. It's their responsibility to have a test suite that checks this kind of thing, and submit a bug fix in case a situation is encountered where it doesn't. You checking these properties in your own code is violating the DRY principle.

          – leftaroundabout
          4 hours ago













        • ...That's not to say there aren't in practice often reasons to mistrust certain third-party functions, but distrusting everything doesn't get you anywhere. Following that logic, you must write everything yourself including the operating system. I think we can agree that this would not give a better end product... — What I'd argue is the best approach to this kind of things is to choose tooling that gets as close as possible for library providers to ensure contracts: strong, static type systems, automated property-based unit testing etc..

          – leftaroundabout
          4 hours ago













        • @leftaroundabout no, but you should be able to predict all valid things your application can accept and reject the rest.

          – Paul
          2 hours ago








        5




        5





        What is q.v. stand for ?

        – JonH
        7 hours ago





        What is q.v. stand for ?

        – JonH
        7 hours ago




        • 3

          @JonH basically "see also"... the Target hack is an example that he is referencing en.oxforddictionaries.com/definition/q.v.

          – andrewtweber
          6 hours ago

























        12














        Yes, of course. But what makes you think the answer could be different?



        You surely don't want to let your program behave in some unpredictable manner in case the API does not return what the contract says, do you? So at the very least you have to deal with such behaviour somehow. A minimal form of error handling is always worth the (very minimal!) effort, and there is absolutely no excuse for not implementing something like this.



        However, how much effort you should invest to deal with such a case is heavily case-dependent and can only be answered in the context of your system. Often, a short log entry and letting the application end gracefully are enough. Sometimes you will be better off implementing some detailed exception handling, dealing with different forms of "wrong" return values, and maybe implementing some fallback strategy.



        But it makes a hell of a difference if you are writing just some in-house spreadsheet formatting application, to be used by fewer than 10 people and where the financial impact of an application crash is quite low, or if you are creating a new autonomous car driving system, where an application crash may cost lives.



        So there is no shortcut against reflecting about what you are doing, using your common sense is always mandatory.
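        As a minimal sketch of such error handling at the boundary: the names `parseMyApiValue` and `ApiContractError` below are made up for illustration, not part of any real MyAPI. The idea is simply to check the contract ("string or number") at the point of entry, and turn any violation into a single well-defined error that the caller can log or handle.

```javascript
// Hypothetical guard for MyAPI's "string or number" contract.
// Neither the function nor the error class comes from a real library.
class ApiContractError extends Error {}

function parseMyApiValue(raw) {
  // The contract says the value is a string or a number; anything else
  // (null, undefined, boolean, object, ...) is a contract violation.
  if (typeof raw === "string" || typeof raw === "number") {
    return raw;
  }
  // Fail in one well-defined way instead of letting bad data propagate.
  throw new ApiContractError(
    `MyAPI contract violated: expected string or number, got ${typeof raw}`
  );
}
```

A caller can then decide per context whether to log and exit gracefully, retry, or fall back — the check itself stays cheap and local.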
































        • What to do is another decision. You may have a failover solution. Anything asynchronous could be retried before creating an exception log (or dead letter). An active alert to the vendor or provider may be an option if the issue persists.

          – mckenzm
          2 hours ago





























        edited 9 hours ago

























        answered 9 hours ago









        Doc Brown


























        8














        The Robustness Principle--specifically, the "be liberal in what you accept" half of it--is a very bad idea in software. It was originally developed in the context of hardware,
        where physical constraints make engineering tolerances very important, but in software, when someone sends you malformed or otherwise improper input, you have two choices. You can either reject it, (preferably with an explanation as to what went wrong,) or you can try to figure out what it was supposed to mean.



        Never, never, never choose that second option unless you have resources equivalent to Google's Search team to throw at your project, because that's what it takes to come up with a computer program that does anything close to a decent job at that particular problem domain. (And even then, Google's suggestions feel like they're coming straight out of left field about half the time.) If you try to do so, what you'll end up with is a massive headache where your program will frequently try to interpret bad input as X, when what the sender really meant was Y.



        This is bad for two reasons. The obvious one is because then you have bad data in your system. The less obvious one is that in many cases, neither you nor the sender will realize that anything went wrong until much later down the road when something blows up in your face, and then suddenly you have a big, expensive mess to fix and no idea what went wrong because the noticeable effect is so far removed from the root cause.



        This is why the Fail Fast principle exists; save everyone involved the headache by applying it to your APIs.
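        A small sketch of the fail-fast option: `parseQuantity` below is a made-up helper, not from any library. It rejects malformed input with an explanation of what went wrong, rather than trying to guess what the sender meant (e.g. silently coercing `"12abc"` to `12`).

```javascript
// Hypothetical fail-fast parser for a numeric string at an API boundary.
function parseQuantity(input) {
  const n = Number(input);
  // Reject non-strings, empty/whitespace strings, and anything that
  // doesn't parse cleanly as a number -- with a message, not a guess.
  if (typeof input !== "string" || input.trim() === "" || Number.isNaN(n)) {
    throw new RangeError(
      `invalid quantity ${JSON.stringify(input)}: expected a numeric string`
    );
  }
  return n;
}
```

The sender gets told immediately that the input was bad, at the point where the root cause is still obvious, instead of discovering corrupted data much later.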

























        • 1





          While I agree with the principle of what you're saying, I think you're mistaken WRT the intent of the Robustness Principle. I've never seen it intended to mean "accept bad data", only "don't be excessively fiddly about good data". For example, if the input is a CSV file, the Robustness Principle wouldn't be a valid argument for trying to parse out dates in an unexpected format, but would support an argument that inferring column order from a header row would be a good idea.

          – Morgen
          7 hours ago






        • 3





          @Morgen: The robustness principle was used to suggest that browsers should accept rather sloppy HTML, and led to deployed web sites being much sloppier than they would have been if browsers had demanded proper HTML. A big part of the problem there, though, was the use of a common format for human-generated and machine-generated content, as opposed to the use of separate human-editable and machine-parsable formats along with utilities to convert between them.

          – supercat
          6 hours ago






        • 2





          @supercat: nevertheless - or just hence - HTML and the WWW were extremely successful ;-)

          – Doc Brown
          6 hours ago








        • 3





          @DocBrown: A lot of really horrible things have become standards simply because they were the first approach that happened to be available when someone with a lot of clout needed to adopt something that met certain minimal criteria, and by the time they gained traction it was too late to select something better.

          – supercat
          6 hours ago






        • 2





          @supercat Exactly. JavaScript immediately comes to mind, for example...

          – Mason Wheeler
          6 hours ago

















        answered 8 hours ago









        Mason Wheeler













        2














        Let's compare the two scenarios and try to come to a conclusion.



        Scenario 1
        Our application assumes the external API will behave as per the agreement.



        Scenario 2
        Our application assumes the external API can misbehave, and therefore adds precautions.



        In general, any API or software has some chance of violating its agreements, whether because of a bug or because of unexpected conditions. Even the API's own internal systems may run into issues that produce unexpected results.



        If our program is written assuming the external API will adhere to the agreement, without adding any precautions, who will face the issues? It will be us, the ones who have written the integration code.



        For example, take the null values that you mentioned. Say the API agreement states the response will never contain null values; if that is suddenly violated, our program will throw null pointer exceptions (NPEs).



        So I believe it is better to make sure your application has some additional code to address unexpected scenarios.
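        To illustrate such a precaution against the NPE-style failure described above: `renderLabel` and the `label` field below are hypothetical, not from any real API. The guard substitutes a safe fallback when the contract ("`label` is always a string") is violated, instead of crashing downstream.

```javascript
// Hypothetical consumer of an API response whose contract promises a
// non-null string `label`. The guard protects against contract violations.
function renderLabel(response) {
  // Covers null/undefined responses and missing or non-string labels.
  if (typeof response?.label !== "string") {
    return "(unknown)";
  }
  return response.label.toUpperCase();
}
```

Whether a silent fallback like this or a loud failure is appropriate depends on the context; the point is that the unexpected case is handled deliberately rather than by accident.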














        New contributor




        lkamal is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.

























          2














          Let's compare the two scenarios and try to come to a conclusion.



          Scenario 1
          Our application assumes the external API will behave as per the agreement.



          Scenario 2
          Our application assumes the external API can misbehave, hence add precautions.



          In general, there is a chance for any API or software to violate the agreements; may be due to a bug or unexpected conditions. Even an API might be having issues in the internal systems resulting in unexpected results.



          If our program is written assuming the external API will adhere to the agreements and avoid adding any precautions; who will be the party facing the issues? It will be us, the ones who has written integration code.



          For example, the null values that you have picked. Say, as per the API agreement the response should have not-null values; but if it is suddenly violated our program will result in NPEs.



          So, I believe it will be better to make sure your application has some additional code to address unexpected scenarios.






          share|improve this answer








          New contributor




          lkamal is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
          Check out our Code of Conduct.























            2












            2








            2







Let's compare the two scenarios and try to come to a conclusion.

Scenario 1
Our application assumes the external API will behave as per the agreement.

Scenario 2
Our application assumes the external API can misbehave, and therefore adds precautions.

In general, there is a chance for any API or piece of software to violate its agreements, perhaps due to a bug or unexpected conditions. An API may also have issues in its internal systems that result in unexpected responses.

If our program is written assuming the external API will adhere to the agreements, and we avoid adding any precautions, who will be the party facing the issues? It will be us, the ones who wrote the integration code.

Take the null values you mentioned, for example. Say that, as per the API agreement, the response should contain no null values; if that is suddenly violated, our program will fail with NPEs.

So, I believe it is better to make sure your application has some additional code to address unexpected scenarios.
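Since the question is framed in terms of a string-or-number contract, such a precaution can be as small as one guard at the boundary. A minimal sketch (the function name and the NaN check are my own choices, not part of MyAPI):

```javascript
// Guard that a value from MyAPI honours its string-or-number contract.
// Anything else (null, undefined, boolean, object, ...) is rejected at the
// boundary instead of propagating through the rest of the program.
function assertStringOrNumber(value) {
  const type = typeof value;
  // NaN is technically a number; rejecting it is an extra precaution here.
  if (type !== 'string' && (type !== 'number' || Number.isNaN(value))) {
    throw new TypeError(`MyAPI contract violation: got ${type} (${String(value)})`);
  }
  return value;
}
```

Calling this once on every response means a contract violation fails loudly at the integration point, where it is easy to diagnose, rather than as an NPE-style error deep inside your own code.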






answered 10 hours ago by lkamal




In general, code should be constructed to uphold at least the following constraints whenever practical:

1. When given correct input, produce correct output.

2. When given valid input (that may or may not be correct), produce valid output (likewise).

3. When given invalid input, process it without any side effects beyond those caused by normal input or those which are defined as signalling an error.

In many situations, programs will essentially pass through various chunks of data without particularly caring about whether they are valid. If such chunks happen to contain invalid data, the program's output would likely contain invalid data as a consequence. Unless a program is specifically designed to validate all data, and to guarantee that it will not produce invalid output even when given invalid input, programs that process its output should allow for the possibility of invalid data within it.

While validating data early on is often desirable, it's not always particularly practical. Among other things, if the validity of one chunk of data depends upon the contents of other chunks, and if the majority of the data fed into some sequence of steps will get filtered out along the way, limiting validation to data which makes it through all stages may yield much better performance than trying to validate everything.

Further, even if a program is only expected to be given pre-validated data, it's often good to have it uphold the above constraints anyway whenever practical. Repeating full validation at every processing step would often be a major performance drain, but the limited amount of validation needed to uphold the above constraints may be much cheaper.
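As an illustration of the third constraint, a processing step can do just enough checking to make sure invalid input only ever produces the defined error signal, never a partial side effect. A sketch (the function and its record format are hypothetical, chosen only for the example):

```javascript
// Sketch of constraint 3: validate before any side effect, so invalid
// input can only yield the defined error signal, never a partial write.
function appendRecord(log, record) {
  if (typeof record !== 'string' || record.includes('\n')) {
    // Defined error signal for invalid input; `log` is untouched.
    return { ok: false, error: 'invalid record' };
  }
  log.push(record); // the only side effect, reached on valid input only
  return { ok: true };
}
```

Note that this is far cheaper than full validation: it checks only what is needed to keep the step's own side effects safe, and lets questionable-but-harmless data flow through.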






answered 6 hours ago by supercat




You should always validate incoming data -- user-entered or otherwise -- so you should have a process in place to handle cases where the data retrieved from this external API is invalid.

Generally speaking, any seam where extra-organizational systems meet should require authentication, authorization (if not defined simply by authentication), and validation.
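At such a seam, one common shape for the validation step (a sketch, not a prescribed design; the function and result format are hypothetical) is a single parse that turns the raw external payload into a value the rest of the system trusts:

```javascript
// Sketch: parse raw external data once, at the seam. Downstream code only
// ever sees the normalized result, never the raw payload.
function parseApiPayload(raw) {
  if (typeof raw === 'string') {
    return { kind: 'text', value: raw };
  }
  if (typeof raw === 'number' && Number.isFinite(raw)) {
    return { kind: 'numeric', value: raw };
  }
  return { kind: 'invalid', value: raw }; // routed to error handling
}
```

Concentrating the checks in one place keeps the rest of the code free of repeated defensive tests against the same external source.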






answered 9 hours ago by StarTrekRedneck



