
Comments (91)

svs avatar svs commented on July 28, 2024 5

I don't believe that integration tests are any substitute for controller tests. Here are some reasons for writing proper controller tests:

  • The controller code is basically the definition of the API for your app. Do you really want this to not be under test coverage?
  • Not writing tests for controllers means you do not get any of the advantages of TDD in your controllers. This means that your controllers run the risk of becoming bloated and unfocussed.
  • Controllers do specific things, and the test for those specific things should be in the controller. Controllers pass messages, assign data and render/redirect as required. Are you really sure you don't want to test these?
  • "Test what you see" leads to brittle tests because the UI has the potential to change quickly, while the controller logic is much more stable. Coupling your controller test result to your flash messages is an unnecessary complexity

Here's a small recipe for writing painless controller tests. Without the shared examples, it's less than 100 LoC to test all authentication, authorisation, message passing, data assignment and method invocation expectations.

https://github.com/svs/painless_controller_tests/blob/master/spec/controllers/items_controller_spec.rb
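
For readers who don't want to click through, a rough sketch of the pattern (the shared examples and factory here are hypothetical, not the contents of the linked file):

describe ItemsController do
  # Hypothetical factory; the real spec sets up its own data.
  let(:admin) { FactoryGirl.create(:user, :admin) }

  describe "GET #index" do
    # Authentication and authorisation live in shared examples,
    # so each action only declares which contracts it must honour.
    it_behaves_like "an action requiring login"
    it_behaves_like "an action restricted to admins"

    context "when logged in as an admin" do
      before do
        session[:user_id] = admin.id
        get :index
      end

      it { expect(response).to render_template(:index) }
      it { expect(assigns(:items)).to match_array(Item.all) }
    end
  end
end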

from betterspecs.

tilsammans avatar tilsammans commented on July 28, 2024 4

Very much agree. Controller specs quickly become a huge mess and are very brittle.

from betterspecs.

glennr avatar glennr commented on July 28, 2024 3

+1 to svs's approach.

Until recently I was very much in agreement with tilsammans, but messy controller specs are a symptom of a different problem with your testing, and more integration tests are not necessarily the solution. Give me a messy set of controller specs over a messy set of integration specs any day of the week please.

If you don't TDD your controller code, then you're doing TDD bad, and you should feel bad.

If you attempt to get coverage through all paths of your controller code using integration specs, then you're gonna end up with a lot of (slow) integration specs. Cue the math: http://www.jbrains.ca/permalink/integrated-tests-are-a-scam-part-1

Also, doing integration tests in Rails generally implies you're using a tool like capybara, but then where do you put your API tests? Not in your capy specs, I hope... http://www.elabs.se/blog/34-capybara-and-testing-apis For better or worse, an API can be expressed really nicely in a controller spec.

My approach now: BDD/TDD, but don't get attached to every single integration spec you write - keep only the high-value ones and let your unit tests do the rest.

from betterspecs.

Fryie avatar Fryie commented on July 28, 2024 3

I don't want to add to this discussion again, but to state that this principle is not controversial is ... quite a statement. I also just watched a screencast by Gary Bernhardt yesterday, where he refactored a big controller test; in the end the controller spec was smaller and more isolated, but it was still kept in place.

I'll furthermore refer to this blog post.

from betterspecs.

svs avatar svs commented on July 28, 2024 2

I am not saying you don't need integration specs. I am saying that they are no substitute for controller specs (and model specs).

People much smarter than I have said it much better than I ever can.

http://blog.thecodewhisperer.com/2010/10/16/integrated-tests-are-a-scam/
http://www.jbrains.ca/permalink/not-just-slow-integration-tests-are-a-vortex-of-doom

I posit that it is impossible to meaningfully cover all your codepaths using integration specs. The complexity explodes combinatorially when you go from one model to two to four to forty. Add in some controllers, throw in some gems and a layer of javascript and you have several million possible cases to test. This is why we push the testing down to the lowest layer possible and at higher layers we only check increasingly high level functionality such as checking parameters passed or alerts received and so on. Do you, in your integration specs, check whether an item got added to the database?

Take the case of checking for uniqueness of an email. It is possible to do this with an integration test. Then you want both name and email to be unique. And no wait, name should be unique only if the user is a premium user... Congratulations, you just spent 15 minutes setting up this test and a minute running it with capybara each and every time, when it's so much simpler to do so at the model level. These tests are in no way redundant. There is no other way to meaningfully test your app completely in any sort of reasonable timeframe.
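
For comparison, roughly what that looks like as a model spec (a sketch assuming a hypothetical User model with a premium flag, conditional uniqueness validations, and FactoryGirl factories):

describe User do
  it "rejects a duplicate email" do
    FactoryGirl.create(:user, email: "a@example.com")
    expect(FactoryGirl.build(:user, email: "a@example.com")).not_to be_valid
  end

  it "allows a duplicate name for ordinary users" do
    FactoryGirl.create(:user, name: "Alice", premium: false)
    expect(FactoryGirl.build(:user, name: "Alice", premium: false)).to be_valid
  end

  it "rejects a duplicate name for premium users" do
    FactoryGirl.create(:user, name: "Alice", premium: true)
    expect(FactoryGirl.build(:user, name: "Alice", premium: true)).not_to be_valid
  end
end

Each example runs in milliseconds against the model, instead of driving a browser through a sign-up form.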

What do I mean by that? Well, for a user-facing app (i.e. almost all of them), the API for my app is the UI. The only way to test the UI is through integration and acceptance tests. Controller tests are simply useless for this: a controller test can tell you that UsersController#show behaves as you expect, but cannot tell you whether GET /users/1 behaves as you expect.

Agreed. But UI tests are UI tests, router tests are router tests, view tests are view tests and integration tests are integration tests. They all have different purposes. Integration tests are called so because they are only to test that the various components of your application have been integrated correctly. Run the requests through the full stack of the app to test whether routers, controllers, models and frontend clients are all behaving well with each other. An integration test's purpose is not to test validation on models or basic correctness of business logic. Down that road lies madness.

The user cares not a whit whether UsersController#show does the right thing, or whether GET /users/1 invokes UsersController#show; rather, he cares that GET /users/1 does what he expects, whether by means of UsersController#show or TreesController#climb.

The user cares not a whit whether you use Rails or PHP or have a team of pigeons randomly choosing answers. But the developers do care. Tests are the best form of documentation there is. When a unit test fails, it gives an error message that can lead developers to the problem quickly. When an integration spec fails, it just says "Expected page to have content 'success' but it does not". Great. Now go looking for the error somewhere else.

In the end, of course, follow whatever testing methodology gives you best results. It's just that I can't see how one is to test every aspect of my system using only integration tests. Or are you suggesting that we let go of testing the whole system and concentrate only on the "important" bits?

from betterspecs.

ryanmtaylor avatar ryanmtaylor commented on July 28, 2024 2

Testing a controller can fall into multiple categories!

  1. Testing controllers as an integration test, before they form a UI, is one thing -- this clearly is not "testing what you see".

  2. Closely related, testing a controller (like an API) because the application has to interface with something is Contract Testing. In certain scenarios you may want to do this, like if another service wants to use your API.

  3. Testing a database write/update won't always fall under "unit testing a model", but when it does you're better testing the model.

  4. Testing a more complex database write (across models) suggests your program has high-level actions that the user (or system) is trying to accomplish. BDD/DDD would say these are features and domain actions. They come directly from business requirements. You should test the business requirements rather than the ins and outs of the controller itself. Test these business requirements as imperative Commands because (a) we must ensure and test that we meet business requirements, and (b) features are less likely to change than a general controller or integration test. If you're familiar with Facebook's Flux pattern or with DDD, the idea of Commands as general endpoints that map to a user/system goal may make more sense.

If you do have to test the ins and outs of the controller, you're back to case 1, where you're defining a contract.

Once again: You should unit test, test business requirements (commands), acceptance test, and then integration test -- in that order, because that's the order in which things are liable to change.

from betterspecs.

brunofacca avatar brunofacca commented on July 28, 2024 2

@marnen: I had given some thought to the points discussed in the previous comments and tried out your approach while developing a small Rails app over the last few weeks. I did not write any controller tests for web views. I have changed my mind and now agree that feature specs can replace controller specs in web apps if the controllers are skinny enough (so if a feature spec fails due to a controller problem, the cause would be quick to pinpoint). After all, everything that we may test in a controller spec is also tested in a well-written feature spec. Although feature specs are much slower than controller specs, our apps need them for UI testing anyway, so the slowness is inevitable.

from betterspecs.

route avatar route commented on July 28, 2024 1

I don't believe that integration tests can replace controller tests, because integration tests are used to test the interaction among any number of controllers. Generally speaking, why should we use an integration test to cover just one action in a controller? That is a task for controller tests. So write tests for your controllers, and write tests for the interaction between controllers; that is my point.

from betterspecs.

tilsammans avatar tilsammans commented on July 28, 2024 1

With controller tests it always seems inevitable to mess with internals and use mocking and stubbing, and this ties the test very much to the implementation. For a run-of-the-mill CRUD controller, writing a controller spec is too much fanfare in my opinion. I just care about the interaction and the end result, and these are handled fine at a higher level. Integration tests are not used to test many controllers; they are used to test the entire MVC stack, and writing them for a single controller (a single user scenario, actually) is a good thing. When I have the user scenario covered, a plain controller spec is superfluous. I agree that they can be useful for unhappy paths.

from betterspecs.

sdeframond avatar sdeframond commented on July 28, 2024 1

We have to do integration tests anyway, either manually or automatically. We have to do it because it is the integrated product that is billed to the customer. Unit tests are merely a convenience for the developers, albeit a very useful one.

Manual integration tests are slower than automated integration tests. So as soon as the application becomes a bit complex it makes sense to have it highly covered with automated integration tests. If those are too slow for your workflow then it is probably a good idea to run them on some CI server. Even then some amount of manual testing is useful.

Unit tests (models and controllers) are very useful to developers because they are faster to run than integration tests and because they encourage good design, but they are redundant.

from betterspecs.

Fryie avatar Fryie commented on July 28, 2024 1

@ryanmtaylor Just answer me this question: what is a controller's responsibility? Because before you answer that question you can't answer whether and how controllers should be tested.

My opinion is that Rails controllers by design / common usage do a really, really poor job at adhering to the SRP. Are they supposed to cover business logic? Parameter handling? Directing routes to specific entrypoints?
Different projects will use controllers in different ways. If your controllers are extremely dumb (i.e. they do exactly zero things before piping everything into, say, a Trailblazer operation), then by all means don't unit test them. But that's probably not how 95% of Rails projects are set up. As soon as there is even just one strong parameters method in there, you basically have parameter validation, and then that's logic that should be covered in your tests.
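
For example, even a boring CRUD controller with a single strong-parameters method carries filtering logic worth covering. A hypothetical sketch (controller, model and factory names are made up; it assumes the default behaviour where unpermitted keys are dropped rather than raising):

class PostsController < ApplicationController
  def create
    @post = Post.create(post_params)
    redirect_to @post
  end

  private

  def post_params
    # :published is deliberately not permitted
    params.require(:post).permit(:title, :body)
  end
end

describe PostsController do
  it "ignores attributes outside the whitelist" do
    post :create, post: { title: "Hi", body: "...", published: true }
    expect(Post.last.published).to be_falsey
  end
end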

from betterspecs.

marnen avatar marnen commented on July 28, 2024 1

@brunofacca:

Although feature specs are much slower than controller specs, our apps need them for UI testing anyway, so the slowness is inevitable.

That's basically my view. I'm glad it worked out in practice for you!

from betterspecs.

marnen avatar marnen commented on July 28, 2024 1

@aesyondu Yikes! Looks like Piotr Solnica's site is now at http://solnic.codes, for what it's worth.

from betterspecs.

josephlord avatar josephlord commented on July 28, 2024

Largely agree and integration tests can ensure that the desired behaviour is working correctly. Do you see a case for negative tests in the controller to ensure that inappropriate information is not being disclosed (potentially in other formats)? I've been trying to check many of the security aspects at the controller level and the functionality aspects at the integration level.

from betterspecs.

oesmith avatar oesmith commented on July 28, 2024

+1 to @josephlord

There's definitely a case for using controller tests for bits of the interface you can't see.

If you're having to mixin Rack::Test::Methods to write your spec, you're really writing a controller spec.

from betterspecs.

andreareginato avatar andreareginato commented on July 28, 2024

I have to admit that I wrote this guideline because lately I was writing a JSON API and it felt just right.
With a more visual UI this idea could change, though I would need more examples of that.

When thinking about this I always have a Rails app in mind. What I actually do is collect all the common functionality (JSON format, security, access, errors) as shared examples, as @svs pointed out. The difference is that I use them to check the final result.

But after all those comments I'm getting rather curious. Can you guys give me some real examples where something can't be done with integration tests, but it can be done with controller testing?

Actually my experience with controller tests wasn't nice, for two main reasons.

  • It was getting a bit messy
  • I had the unpleasant feeling I was duplicating a lot of tests

The last one in particular is now gone.

from betterspecs.

route avatar route commented on July 28, 2024

With controller tests it always seems inevitable to mess with internals, use mocking and stubbing and this ties the test very much to the implementation.

You don't have to use mocks if you don't need them; that applies to all kinds of tests.

Integration tests are not used to test many controllers, they are used to test the entire MVC stack and writing them for a single controller (a single user scenario actually) is a good thing.

Sorry I didn't mean exactly a few controllers, I meant a scenario.

from betterspecs.

route avatar route commented on July 28, 2024

Can you guys give me some real examples where something can't be done with integration tests, but it can be done with controller testing?

Your question sounds a bit confusing to me. I think some things can be done more easily with a controller test than with an integration test. For example: testing auth and rights, redirects, redirects depending on headers, sending special headers, and before/after filters that do "invisible" work on a page.
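
A hypothetical example of the kind of check that is quick at this level (the routes, factories and caching behaviour are made up for illustration):

describe ReportsController do
  describe "GET #show" do
    it "redirects anonymous users to the sign-in page" do
      get :show, id: 1
      expect(response).to redirect_to(login_path)
    end

    it "sends a no-cache header to signed-in users" do
      session[:user_id] = FactoryGirl.create(:user).id
      get :show, id: FactoryGirl.create(:report).id
      expect(response.headers["Cache-Control"]).to include("no-cache")
    end
  end
end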

from betterspecs.

rimidl avatar rimidl commented on July 28, 2024

+1 to @route, @svs. I think the same. Let me add something.

We shouldn't forget the important idea: "divide and conquer".

We should use integration specs for rough, surface-level testing. We have to pretend to be, and behave like, an ordinary user; we should think and do the same things they would.

We should use controller specs to check the details of the implementation. This level of spec lets us concentrate on implementation more than integration specs do, and lets us fill in the narrow places that integration tests miss.

Let me note that view specs may be very helpful too, in some specific cases of course.

from betterspecs.

svs avatar svs commented on July 28, 2024

Small blog post about this topic: http://svs.io/post/32926616364/painless-controller-testing-in-rails

from betterspecs.

pepe avatar pepe commented on July 28, 2024

@andreareginato +1

from betterspecs.

Govinda-Fichtner avatar Govinda-Fichtner commented on July 28, 2024

Integration tests with cucumber/capybara usually tend to be much slower than well-written unit tests... so for the quick health feedback alone that unit tests give me about my app, I would not do without them... that seems to be an aspect that a lot of people forget lately when they say they are only doing integration tests...

from betterspecs.

Naomarik avatar Naomarik commented on July 28, 2024

Disclaimer: I'm a newb, let me know if I'm wrong on any of these points.

If following this advice, how does the author intend to test the API for apps that serve JSON or XML?

Also, it seems a lot faster to test CRUD operations by interacting directly with the controller, since you're able to set user login state rather than having to drive your whole webapp.

For instance, in a controller spec you could have

brand = FactoryGirl.create(:brand) # the record the new world belongs to
session[:user_id] = FactoryGirl.create(:user).id
post :create, :world => FactoryGirl.attributes_for(:world, :brand_id => brand.id)

Testing this in capybara would have to hit the login section of your site, navigate to the page, then fill in each input of your form. This would turn a 200ms test into one that takes a few seconds.
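
The same behaviour driven through the UI might look roughly like this (a sketch with hypothetical routes, form labels and factory attributes):

feature "Creating a world" do
  scenario "as a signed-in user" do
    user = FactoryGirl.create(:user, password: "secret")

    # Log in through the real form...
    visit login_path
    fill_in "Email", with: user.email
    fill_in "Password", with: "secret"
    click_button "Log in"

    # ...then drive the actual create form.
    visit new_world_path
    fill_in "Name", with: "Middle Earth"
    click_button "Create World"

    expect(page).to have_content("World was successfully created")
  end
end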

I think integration tests have their place in ensuring your webapp is generally working, but more rigorous tests of what input is acceptable seem much more efficient when done in controller specs, both in testing speed and in code complexity.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@sdeframond Exactly. Unit tests are less useful than integration/acceptance tests. They're great for models, but not for controllers, where they're too complicated and don't test what you really want tested.

@svs:

I don't believe that integration tests are any substitute for controller tests.

I think you're wrong, as I'll explain below.

Here are some reasons for writing proper controller tests

The controller code is basically the definition of the API for your app. Do you really want this to not be under test coverage?

No. That's why I don't trust it to controller specs.

What do I mean by that? Well, for a user-facing app (i.e. almost all of them), the API for my app is the UI. The only way to test the UI is through integration and acceptance tests. Controller tests are simply useless for this: a controller test can tell you that UsersController#show behaves as you expect, but cannot tell you whether GET /users/1 behaves as you expect. The user cares not a whit whether UsersController#show does the right thing, or whether GET /users/1 invokes UsersController#show; rather, he cares that GET /users/1 does what he expects, whether by means of UsersController#show or TreesController#climb.

In other words, controller tests don't actually test what you care about. They don't test what you see.

(I do occasionally write controller tests for controllers that manage HTTP service endpoints instead of Web pages, but even there I usually find acceptance tests more useful.)

Not writing tests for controllers means you do not get any of the advantages of TDD in your controllers. This means that your controllers run the risk of becoming bloated and unfocussed.

Wrong. I get the advantages of TDD for my controllers by testing what I see through my integration tests. And my controllers aren't bloated or unfocused; it's rare that I have more than about 3 lines of code in any action.

Controllers do specific things, and the test for those specific things should be in the controller. Controllers pass messages, assign data and render/redirect as required. Are you really sure you don't want to test these?

I do want to test these things, but not directly, just as I don't want to test my private model methods directly. They are internal implementation, and as such are sufficiently tested by means of testing the interfaces that I do care about. Test what you see.

"Test what you see" leads to brittle tests because the UI has the potential to change quickly, while the controller logic is much more stable. Coupling your controller test result to your flash messages is an unnecessary complexity

No. If the user cares about the flash message, it should be tested (and if not, don't bother testing it, but focus on whatever the user does care about). That's not unnecessary complexity; it's testing what's important to the user. And in my experience, controller tests are far more brittle than Cucumber acceptance stories.

Here's a small recipe for writing painless controller tests. Without the shared examples, it's less than 100 LoC to test all authentication, authorisation, message passing, data assignment and method invocation expectations.

https://github.com/svs/painless_controller_tests/blob/master/spec/controllers/items_controller_spec.rb

That's beautiful but useless. You're testing that get :show does the right thing, which is completely irrelevant, because you're not testing that you're actually calling get :show where you think you are, at least as far as I can tell.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

Also, one more thought: fast tests are not useful if they don't actually test what you need tested.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@svs:

I am not saying you don't need integration specs. I am saying that they
are no substitute for controller specs (and model specs).

And I'm saying you're wrong.

People much smarter than I have said it much better than I ever can.

http://blog.thecodewhisperer.com/2010/10/16/integrated-tests-are-a-scam/

I have read that article. He doesn't back up his assertions, and doesn't
provide any of the sort of tests he's talking about. For these reasons, I
do not think it is worth taking that post seriously.

http://www.jbrains.ca/permalink/not-just-slow-integration-tests-are-a-vortex-of-doom

I'll look at that.

I posit that it is impossible to meaningfully cover all your codepaths
using integration specs.

True. And the same is also true of unit specs, at least once you try
for anything past C0. No real difference here.

Do you, in your integration specs, check whether an item got added to the
database?

Generally not directly (I save that for model specs, which are technically
integration specs but are not usually thought of that way in Rails).
Instead, I check that the item shows up on the list of items.

Take the case of checking for uniqueness of an email. It is possible to do
this with an integration test. Then you want both name and email to be
unique. And no wait, name should be unique only if the user is a premium
users....Congratulations, you just spent 15 minutes setting up this test
and a minute running it using capybara each and every time when it's so
much simpler to do so at the model level.

I do this sort of testing at the model level, and I'm not sure what gave
you the idea that I don't.

I'm pressed for time right now and will attempt to respond to your other
points later.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

Just looked at the jbrains.ca link. It's interesting, but appears to be talking about a different type of use of integration tests, in a different programming context. I don't think it brings much useful (pro or con) to the present discussion—at least I don't see the relevancy to Rails-style tests.

More later.

from betterspecs.

Dariusz-Choinski avatar Dariusz-Choinski commented on July 28, 2024

It seems to me that relying only on integration testing is a bad practice. Integration tests always use views; what about features that do not have views and happen in the background? If you test only what you see, it means you do not test what you do not see. There are many such things, such as e-mail scheduling, payments, data transfer, file manipulation, format conversion, etc., that can be overlooked in testing if you believe that controller testing is not necessary. Controller testing and integration testing are both parts of application testing. It is not good to cut off a part of the application and throw it away, saying "Oh! I don't care about it." It can happen that you release the application with plenty of bugs and are not aware of it, because you didn't care about that part. I agree that in the case of CRUD actions integration testing is enough, even better in simple cases, but I disagree that integration testing is enough in every case.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@Dariusz-Choinski:

Integration tests always use views, what about features that do not have views, that are happening in the background?

If they are features (for the user), there will always be some sort of output, be it HTML views, e-mail, JSON, or whatever. You test that output in your integration tests; it's not just views.

If there is no output, then the user never sees them, so there is no direct value for him, and they're probably best either unit-tested in isolation or (probably better) integration tested as part of whatever they support that is of value for the user.

You're complaining about an overly narrow type of integration testing. No one is actually advocating that, AFAIK.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

The rest of my response to @svs:

Agreed. But UI tests are UI tests, router tests are router tests, view tests are view tests and integration tests are integration tests. They all have different purposes. Integration tests are called so because they are only to test that the various components of your application have been integrated correctly. Run the requests through the full stack of the app to test whether routers, controllers, models and frontend clients are all behaving well with each other. An integration test's purpose is not to test validation on models or basic correctness of business logic. Down that road lies madness.

Mostly agreed. Validation of models lives in the model and can be easily verified by unit testing the model. To the extent that business logic lives in the model or other atomic object, same answer. I'm not disputing this point. I do unit-test my models quite extensively. That makes sense: they're "heavy" objects with lots of state and behavior. Properly skinny controllers are not (see more on this point below).

However, if for some reason I could only have one kind of tests, I would trust my integration tests to show me that the application worked correctly as a whole. I would not trust my unit tests to do so, because they cannot.

The user cares not a whit whether you use Rails or PHP or have a team of pigeons randomly choosing answers. But the developers do care. Tests are the best form of documentation there is.

But they should document interface, not implementation. When the developers need to know about implementation, they either inspect the code or write separate developer docs. They don't (or shouldn't) look at the tests as implementation documentation: that's very much not the point of tests. This is why refactoring works so well with tests: the tests specify the interface, and you can change the implementation as you like with the confidence that the interface hasn't been broken.

When a unit test fails it gives an error message that can lead developers to solve the problem quickly. When an integration spec fails it just say "Expected page to have content 'success' but it does not". Great. Now go looking for the error somewhere else.

That has not been my experience at all. Even in quite complex applications, I can generally tell exactly where the error is from my integration test failures. When I can't, it has always been due to complex interactions between controllers and models that wouldn't have been caught by controller unit tests anyway!

Perhaps your integration tests are written at a different level of granularity if you're having a problem in this respect?

It's just that I can't see how one is to test every aspect of my system using only integration tests. Or are you suggesting that we let go of testing the whole system and concentrate only on the "important" bits?

I don't test every aspect of my system, I suppose—that's not feasible in a system of any complexity. Rather, I test every aspect of my system that the user cares about. If something has not been specified in a user story, then I believe its behavior is undefined, so I'm not going to write a test for it. (Before you accuse me of being slipshod here, note that part of my process of writing user stories is asking the user detailed questions about what the system should do in every case that the user or I can think of. Those answers get written into tests.)


Another reason that I don't find much utility in writing Rails controller tests is pure practicality. I follow "skinny controller, fat model" as a matter of course, which means that my controller actions are simply brain-dead glue code. Anything complex in a controller action gets refactored into a model method. This means that my controller actions are typically no more complex than

def index
  @user = session[:current_user]
  @posts = @user.posts.published
end

There's nothing interesting to test here in isolation: if you try, you'll wind up mocking or stubbing so much that it isn't really a test. So you test Post.published, and you test the integration. At least, that's how it looks to me. I see how to write a unit spec for this action; I just don't see how to write a meaningful unit spec for it.
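
A sketch of how that split might look, shown here as a plain model spec plus a Capybara feature spec for brevity (the factory attributes and page content are hypothetical):

# Model spec: the interesting logic lives in the scope.
describe Post do
  describe ".published" do
    it "returns only published posts" do
      published = FactoryGirl.create(:post, published: true)
      FactoryGirl.create(:post, published: false)
      expect(Post.published).to eq([published])
    end
  end
end

# Integration: the glue code is exercised end to end through the UI.
feature "My posts" do
  scenario "lists only published posts" do
    # sign in as a user with one published post and one draft...
    visit posts_path
    expect(page).to have_content("My published post")
    expect(page).not_to have_content("My draft post")
  end
end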

I'd actually say—and I hadn't thought about it this way before—that if your controller methods benefit from unit tests, then that probably means your controllers are too fat.

from betterspecs.

josephlord avatar josephlord commented on July 28, 2024

@marnen Where do you place (and test) access controls? Surely controllers (and their tests) are the right place to put the security boundary and certainly to test it.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@josephlord What sort of access controls are you referring to? Authentication? Authorization? Something else?

from betterspecs.

marnen avatar marnen commented on July 28, 2024

Actually, I guess I can expand on that before @josephlord replies.

First of all, anything that affects the user's experience needs to be in an acceptance/integration test so that it's verified to be present where it's supposed to be. That's a given.

Now, as far as authorization goes, I like putting as much of that as possible in the model (Cancan's approach is good here): surely the model (or a specific Permission model) should be smart enough to determine who's allowed to look at it. I think that it makes sense to either have something like if @user.can? :read, @post, if @post.reader? @user, or even if Permission.for(@user, @post, :read), so that all the logic is in model methods, and is tested at the model level. At that point, I don't really see what a controller test gets you that hasn't already been covered by the model and integration tests: the controller is just calling model methods anyway, and has no logic of its own (except perhaps a redirect if @user.can? :read, @post turns out to be false).

Authentication does probably belong more in the controller than authorization, but here again, I'm not sure the controller methods do much that's interesting to test in isolation.

Does that answer your questions?

from betterspecs.

josephlord avatar josephlord commented on July 28, 2024

@marnen Pretty much answers it. I would have the model knowing who can access specific objects but wouldn't expect it to check who the current user is on requests as I think that is a controller job. I also allow different parameters based on who the user is at times (e.g. setting password and password_confirmation are only allowed for the current user).

These functions should be tested in the controller (particularly the unauthorized cases) as a malicious actor could bypass the HTML form tested by the integration testing and insert additional parameters.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@josephlord:

I would have the model knowing who can access specific objects but wouldn't expect it to check who the current user is on requests as I think that is a controller job.

Exactly. The controller passes the user to the model method as a parameter.

I also allow different parameters based on who the user is at times (e.g. setting password and password_confirmation are only allowed for the current user).

So your authorization logic actually has something like

class User
  def can_change_password?(user)
    if user == session[:current_user]
      # ...
    end
  end
end

?

That's a bit tough to know what to do with; I think I'd call it poor MVC myself, since the model normally shouldn't be touching the session (unless you have a model like Authlogic's UserSession class, whose sole purpose is to abstract the session features). I think I'd probably do

class User
  def can_change_password?(user)
    user == self
  end
end

which is easier to test at the model level and keeps MVC layers more separate.

These functions should be tested in the controller (particularly the unauthorized cases) as a malicious actor could bypass the HTML form tested by the integration testing and insert additional parameters.

Controller testing won't actually help here, since it does not guarantee that you're testing the action that the malicious actor would be requesting: the malicious actor is not requesting a controller action, but rather a URL. So do this as an integration test by driving curl or something.

The fundamental point is this: users request URLs, not controller actions, and therefore testing a controller action cannot, by itself, verify that the user will see what's in that action, because it does not test that the user ever gets to that action from a given URL.

from betterspecs.

josephlord avatar josephlord commented on July 28, 2024

@marnen Actually my user controller (using Rails4/strong_parameters) has this logic:

    def model_params(newModel = false)
      if @user_user == current_user || newModel
        params.require(:user_user).permit(:loginname, :displayname, :email, :password, :password_confirmation)
      else
        params.require(:user_user).permit(:loginname, :displayname, :email)
      end
    end

The controllers will in other places ask the model if the user can take an action before doing it. The model never knows the current user, and the current user doesn't get passed to the model (except where that user is really the subject of the operation).

Controller testing won't actually help here, since it does not guarantee that you're testing the action that the malicious actor would be requesting: the malicious actor is not requesting a controller action, but rather a URL. So do this as an integration test by driving curl or something.

Actually it is the other way round. Someone may find a way to access the controller via URLs that you don't expect (different format, different request type, etc.), but they need to hit a controller at some point, so that is the choke point for requests and the right place for tests that ensure safe parameters and correct access rights. You could test with Curl, but that would be less complete (e.g. if Rails 5 adds a new request format), harder and slower.

The fundamental point is this: users request URLs, not controller actions, and therefore testing a controller action cannot, by itself, verify that the user will see what's in that action, because it does not test that the user ever gets to that action from a given URL.

I largely agree for testing the behaviour that you want to see but there is still value in testing the unexpected where you don't really care what the response looks like but you don't want it to succeed.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

Wow, what a fascinating discussion!

@josephlord:

Actually my user controller (using Rails4/strong_parameters) has this logic:

IMHO that's both too much duplication and too much logic in the controller —one of the (many) things I dislike about strong parameters is the fact that it makes it difficult to keep controllers skinny. I'd tend to refactor all this to a model method, perhaps in the following way (though it's difficult to know for sure without more context):

# controller
def model_params(new_model = false) # or use new_record? method and get rid of new_model parameter
  params.require(:user_user).permit *@user_user.permitted_params(current_user, new_model: new_model)
end

# model
def permitted_params(user, new_model: false)
  params = [:loginname, :displayname, :email]
  params += [:password, :password_confirmation] if (user == self) || new_model
  params
end

This has several benefits:

  • It puts the knowledge of which fields are writeable into the model, where it really belongs.
  • It keeps the controller skinny. Notice that all the conditional logic has been removed from the controller method, which is now a one-liner.
  • It is more easily unit-testable: User#permitted_params is public (of necessity), and on a model object that can be tested in isolation. UsersController#model_params had better be private, so you can't/shouldn't really unit test it even if you are doing controller unit tests.
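
A unit spec for that model method can stay tiny; a sketch against the code above:

describe User do
  describe "#permitted_params" do
    let(:user) { User.new }

    it "includes the password fields when users edit themselves" do
      expect(user.permitted_params(user)).to include(:password, :password_confirmation)
    end

    it "excludes the password fields for other users" do
      expect(user.permitted_params(User.new)).not_to include(:password)
    end

    it "includes the password fields for new records" do
      expect(user.permitted_params(User.new, new_model: true)).to include(:password)
    end
  end
end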

(BTW, why newModel and displayname instead of new_model and display_name?)

Actually it is the other way round. Someone may find a way to access the controller by URLs that you don't expect (different format, different request type etc.)

Example? Unless you have very poorly written wildcard routes, I can't think of a way that this would be possible.

but they need to hit a controller at some point so that is the choke point for requests and the right place for tests to ensure safe parameters and correct access rights.

No, in fact requests within the virtual directory of your Rails app don't need to hit a controller at some point (potentially). For example, remember that static assets have URLs that look like Rails URLs, but (at least in production mode) usually aren't served by Rails. That is, there's no obvious difference from the outside between http://myrailsapp.com/users/1.json (UsersController#show) and http://myrailsapp.com/users/1.png (public/users/1.png).

Also, even if we disregard that and consider the controller as a choke point… which controller? ApplicationController seems logical, but can you guarantee that all your controllers inherit from it? (Yes, they should do so by Rails convention, but do they?)

That last may be slightly farfetched. The point I'm trying to make, though, is that I don't think you can safely prove this with unit tests. Since the user is requesting a URL, possibly maliciously, your tests need to act like a user, request that URL, follow the full stack through, and examine the actual behavior of all the parts, not just that of the parts that you thought might be worth unit testing.

You could test with Curl but that would be less complete (e.g. if Rails 5 adds a new request format), harder and slower.

I'm not going to design my tests for Rails 5, because Rails 5 doesn't exist yet. When it does, I will update my tests. YAGNI.

Testing with Curl (or equivalent) is the only proper approach I can come up with in this particular use case. Other approaches may be faster, but they gain that speed by making assumptions about what needs to be tested; these assumptions may or may not be valid. Who cares how fast your tests are if they're incorrect?

Am I missing something here? How would you construct a test here that didn't exercise the full stack?

there is still value in testing the unexpected where you don't really care what the response looks like but you don't want it to succeed.

Right! So just test for an appropriate HTTP status code and don't look at the rest of the response. You can (and IMHO should) use a full-stack test to do this.
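
For instance, a full-stack sketch of the unhappy path (assuming a hypothetical app that answers an unauthorised update with 403, a sign_in test helper, and FactoryGirl factories):

# spec/requests/forbidden_update_spec.rb
describe "updating someone else's account" do
  it "is refused with an appropriate status code" do
    attacker = FactoryGirl.create(:user)
    victim   = FactoryGirl.create(:user)
    sign_in attacker   # hypothetical sign-in helper for request specs

    put "/users/#{victim.id}", user_user: { email: "evil@example.com" }

    # Only the status matters here; the body of the response is irrelevant.
    expect(response.status).to eq(403)
  end
end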

from betterspecs.

svs avatar svs commented on July 28, 2024

@marnen If I understand you correctly, your approach is to put everything into the model, unit test the model and write integration tests for the skinny controller. Just wondering how you test for the following in the lifetime of a curl request

  • Did my mail get sent?
  • Did an audit trail get created?
  • Was an item added to the database?

Do you parse the flash messages? Personally I have found flash messages to be no guarantee of anything.

Also, assuming you make (as I do) many small and focussed classes, how do you check if the right one got called? For a pseudo example, let's say you search differently based on user type.

  ...
  search_model = "#{current_user.type.camelize}Search".constantize.new(params).search
  ...

How do you know you haven't called AdminSearch instead of GimpSearch? Setting up integration tests for each user type is very time consuming. In this case, I like that I can say

  AdminSearch.any_instance.should_not_receive(:search)

Checking whether messages were sent to the correct recipient is a critical part of application testing.

Can you share how you do curl testing? Is it from inside the Rack environment like capybara etc? Or are you completely isolated from the apps internals while testing? And why curl? What about the js in your frontend? Do you write capybara specs as well?

Your comment

This has several benefits:
It puts the knowledge of which fields are writeable into the model, where it really belongs.

illustrates your stance. Knowledge of which fields are writable by which user is definitely not a model concern according to me. Different fields are writable by different people under different circumstances. The model is just a way to persist data to the database. I think I begin to see the issue at hand here. It's that we write apps in fundamentally different ways.

From the above discussion, it seems you put everything into the model. In this circumstance, not having controller tests is probably ok, though I would wager that your models are often over 100 lines long and you will soon break them up into Service objects, Presenters and so on. When one controller action needs to mediate between these various classes, I think there is tremendous value in being able to peer inside your controllers and verify that they are actually doing what you think they're doing.

End results can be deceptively similar and setting up tests for each possible end result is way more time consuming than checking if the right message got sent to the right receiver.

And yes, wrap everything up with a nice capybara spec suite so you can deploy with confidence.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@svs:

@marnen If I understand you correctly, your approach is to put everything into the model, unit test the model and write integration tests for the skinny controller.

Not quite everything in the model. But the controller shouldn't be doing much except calling simple model methods and passing the results on to the view. It can also deal with the session and a few other things that don't fit neatly into model or view responsibilities. This is the usual Rails MVC wisdom ("skinny controller, fat model!"), and I think it's correct. (I use gems like Draper and Cells a lot, too.)

Just wondering how you test for the following in the lifetime of a curl request

Well, I don't use Curl directly in my integration tests for the most part. I was specifically thinking of it for @josephlord's situation of wanting to try a malicious attack. For my usual tests, I use Capybara, driven by Cucumber.

Did my mail get sent?

I use capybara-email or similar.
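
Something along these lines, using capybara-email's open_email / current_email helpers (the routes and mail content are hypothetical):

feature "Sign-up confirmation" do
  scenario "a confirmation mail is sent and its link works" do
    visit signup_path
    fill_in "Email", with: "new@example.com"
    click_button "Sign up"

    open_email "new@example.com"               # provided by capybara-email
    expect(current_email).to have_content("Confirm your account")
    current_email.click_link "Confirm"         # follows the link back into the app
    expect(page).to have_content("Your account is confirmed")
  end
end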

Did an audit trail get created?
Was an item added to the database?

In the former case, I'd definitely test this through the view (navigate to where the audit trail is displayed…). In the latter case, I'd probably do likewise, but I've been known to have my Cucumber stories check for the presence of DB records instead. I don't like that, though, because the user cares about interface, not implementation.

Do you parse the flash messages?

Occasionally. Usually I rely on more detailed information.

Also, assuming you make (as I do) many small and focussed classes, how do you check if the right one got called?

I don't. I care about the behavior, not which class got called. So I'd set up a scenario that would only succeed if the right one got called, and test through the interface.

In this case, I like that I can say AdminSearch.any_instance.should_not_receive(:search)

And that would be exactly the wrong thing to do, because you've just tied that test to a particular implementation.

Checking whether messages were sent to the correct recipient is a critical part of application testing.

No. Checking that the behavior is correct is a critical part of testing. What messages were sent where is implementation. I deliberately don't write tests that check that, because I don't want my tests to be tied to a particular implementation. A major point of tests is that you can completely change your implementation and still have them pass. You lose that benefit as soon as you call should_receive.

Can you share how you do curl testing? Is it from inside the Rack environment like capybara etc?

As I said above, I don't do raw Curl testing; that was a suggestion for one specific use case. I use Capybara, driven by Cucumber.

What about the js in your frontend? Do you write capybara specs as well?

Yes (capybara-webkit totally rocks!), and I unit-test my JavaScript/CoffeeScript with Jasmine (and jasmine-headless-webkit) as well.

Knowledge of which fields are writable by which user is definitely not a model concern according to me. Different fields are writable by different people under different circumstances.

Right. And the model should be smart enough to figure out what those circumstances are.

The model is just a way to persist data to the database.

Good God, no! The nature of ActiveRecord sort of promotes this misconception, but that's actually a severe perversion of Rails-style MVC. The model in Rails MVC represents the domain object. It is allowed to handle persistence, but there is no requirement that a domain object be persisted to the DB—in a typical application, DB persistence will make sense for some domain objects, but not for others.

The model class, then, encapsulates and manages all the state and behavior relating to the domain object it represents. If that includes managing DB persistence, great. To the model class' collaborators, the DB persistence is a private implementation detail; your model's public interface would (ideally) change little or not at all if you added or removed DB persistence.

BTW, note that ActiveRecord models are arguably more coupled to DB persistence than they should be. DataMapper and Mongoid, among others, work through inclusion, not inheritance, and probably keep the model classes less tightly coupled to the DB implementation than AR does. This is a Good Thing, and the coupling should probably be considered a design flaw in AR.

From the above discussion, it seems you put everything into the model. In this circumstance, not having controller tests is probably ok, though I would wager that your models are often over 100 lines long and you will soon break them up into Service objects, Presenters and so on.

Absolutely! That's good OO design: when a class gets bloated, split it up into other classes. All the controller should do is call these objects (with maybe a simple conditional here and there), and pass the data on to the view. Skinny controller, fat model.

When one controller action needs to mediate between these various classes, I think there is tremendous value in being able to peer inside your controllers and verify that they are actually doing what you think they're doing.

And I think that at that point your complex operations, spanning multiple classes, probably indicate that you have a new domain object that should be extracted into a class of its own somewhere in the model layer. Got an example we can dissect?

End results can be deceptively similar and setting up tests for each possible end result is way more time consuming than checking if the right message got sent to the right receiver.

Time-consuming? Sure. It's also the only correct approach I know of. If I use should_receive, my test is brittle, and may change every time I rename a method or move it to another object. Brittle tests destroy a major benefit of testing, in that I can't use my tests to help me refactor. OTOH, if I test only the public interface, then I'm guaranteed to have my tests continue passing as long as I don't break that interface—regardless of how it's implemented. That means my tests verify the public contract, not some irrelevant implementation details. This makes them more powerful and more useful. This is the power of testing what you see.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

BTW, a bit of empirical evidence in favor of my viewpoint: I'm currently refactoring view and controller specs to Cucumber stories in the Quorum project (see http://marnen.lighthouseapp.com/projects/20949-quorum/tickets/54-remove-remaining-controller-and-view-specs). In the process of doing this, I have so far discovered two features that didn't actually work despite their view specs passing—because the view specs did not, and could not, test them sufficiently in isolation, so they misled me into thinking the application was well tested when it wasn't.

from betterspecs.

conzett avatar conzett commented on July 28, 2024

Given the amount of arguments on both sides I feel like this should be neutral at best. At the very least the language should be softened up:

People supporting the need of testing controllers will tell you that your integration tests
don't cover all use cases and that they are slow. Both are wrong.

Reads as really dogmatic compared to the other even-handed suggestions on the list.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@conzett The whole point of betterspecs is, I think, to formulate best practices for RSpec—that is, to be dogmatic. Why should this be softened? Indeed, how could it be softened and still be useful? How can best practices be "even-handed", as you suggest, since their very purpose is to recommend one thing over another?

In a field like software engineering where we can—and must—base decisions on demonstrable logic, I think being dogmatic is kind of a good thing, provided that the dogma is correct. (And if it's not correct, we should correct it!)

from betterspecs.

conzett avatar conzett commented on July 28, 2024

@marnen You raise an excellent point, betterspecs certainly would be less useful if it didn't take a firm stand on style one way or the other.

I guess the reason this one initially stuck out to me as "dogmatic" (perhaps too strong a word) was that it speaks more about the process of testing in general than about RSpec-specific syntax, which is what most of the other advice covers.

"Betterspecs" to me means better rspec tests in general, not just for idiomatic Rails apps. With that in mind I think it might be at least worth noting that there are some scenarios where testing a controller in isolation beneficial. Cheers.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@conzett:

With that in mind I think it might at least be worth noting that there are some scenarios where testing a controller in isolation is beneficial.

That would itself be controversial. I'm having a hard time thinking of any proper use case for a controller unit test in Rails, assuming your controllers are properly skinny. Do you have an example? For myself, after about 6 years of Rails development (writing controller tests for the first couple years, then discarding them as I came to this realization), I've yet to come across anything that controller tests are actually good for. (I've occasionally used them for API endpoints, but I've mostly moved away from that practice.)

IMHO controller tests in Rails are a fool trap: writing them makes you feel like you're doing something, but in fact it's needlessly painful and accomplishes nothing remotely useful, unless your controllers have too much logic in them. If they do have too much logic, then you should refactor them—after having written integration tests to make sure that the application's behavior stays the same.

When I come on board a new project, I consider controller tests to be a serious code smell: in addition to being painful to write, they provide a false sense of security, because they test, but they test the wrong things. If I can't get rid of them, I at least don't write any more.

from betterspecs.

conzett avatar conzett commented on July 28, 2024

@marnen Thanks for responding! Here are a few areas where I have found controller specs beneficial in the past (I'd be interested in hearing your thoughts)

  • JSON APIs: Testing things like content negotiation, things you could also do with cURL, but it's nice to have it live with the other specs
  • SOA: Multiple independent apps communicating with each other, it's simple just to stub that call and make sure the correct service was called when the controller action was executed. Really any service at all, external APIs, back-office reporting, anything you need to integrate but don't have visibility into.

As an aside, the phrase "integration test" seems to be used differently in the Rails community than I am used to, it seems to be used in the context of what I would call an "end-to-end" or "full-stack" test, is that a correct assessment? Coming from the .net landscape integration test just meant a test that checked how any two components fit together.

Thanks!

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@conzett:

JSON APIs are generally better tested with integration tests than with controller tests, since they are user-facing. As with any other controller tests, they don't test what you need tested in this case.

I'm not quite getting your SOA example, and I'd love it if you'd elaborate, but if you mean what I think you mean, then that sounds like improper testing practice. (In fact, it's alarmingly common for apparent use cases for controller tests to turn out to be cases where someone didn't know a few integration testing tricks that would have made life easier.)

As an aside, the phrase "integration test" seems to be used differently in the Rails community than I am used to, it seems to be used in the context of what I would call an "end-to-end" or "full-stack" test, is that a correct assessment? Coming from the .net landscape integration test just meant a test that checked how any two components fit together.

An integration test is a test that checks how multiple units fit together, yes. There's not much partial integration testing in the Rails community, though; generally you either test one component or the full stack. What would be the point of a partial integration test, anyway? I'm having trouble seeing when this would be useful. (Exception: people like Avdi Grimm consider AR unit tests to be partial integration tests, since they involve the DB. They also write AR unit tests that don't involve the DB. While I appreciate the theory, I think that the design of AR may make this more trouble than it's worth in practice.)

from betterspecs.

conzett avatar conzett commented on July 28, 2024

@marnen

Yes, I think Avdi's definition is what I am used to hearing in other ecosystems, thanks.

I'd like to hear more about some of the tricks you mentioned involving integration testing for services, from my limited experience it's been a great pain. To help clarify, I guess an example in the Rails world would be several different Rails apps communicating with each-other.

From the examples presented in the original article it seems like the author is recommending using a web driver to hit the controller and then check the results somewhere on another page. Sounds fine for a single app that only depends on itself, but setting all of that infrastructure up for a test can be a lot of work with many distributed systems. Would it not be better to just stub the call to that service in the controller and check that it's called when that action is hit?

Great discussion!

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@conzett:

To help clarify, I guess an example in the Rails world would be several different Rails apps communicating with each other.

Sorry, but that (and a mention of an article that I don't think you actually cited) still tells me little about what you're trying to test. Got some controller code?

Or are you looking for something like WebMock?

from betterspecs.

conzett avatar conzett commented on July 28, 2024

@marnen Sorry, by article I meant this one we are commenting on right now :P (post? issue?). I don't really have a concrete example off the top of my head. I guess I was wondering how you would do a full, end-to-end integration test with Capybara (mentioned in this betterspecs post) when your app has to call out to many different services (hence why SOA came to mind as an example). Would you set up test instances of all of those services involved to make the full integration test work?

(if I don't respond right away, I'll catch you in the morning 👍)

from betterspecs.

marnen avatar marnen commented on July 28, 2024

WebMock and VCR automate the setup of test services. Go follow the link in my last comment. :)

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@conzett:
Also, something I missed on first reading:

Would it not be better to just stub that call to that service in the controller and check and see if it's called when that action is hit?

Probably not. The more methods you stub in your tests, the closer your tests are coupled to your implementation, and the less they will help you refactor your implementation. Besides, stubs can be wrong, and often are.

WebMock actually does stub, but it stubs low-level methods that are rarely called directly, so the tests are not brittle. Also, VCR helps make sure the tests actually reflect reality. This is generally the proper approach.
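
For what it's worth, a minimal sketch of that setup might look like this (the payment service and PaymentGateway class are invented for the example; VCR's cassette support is what keeps the stubs honest):

# spec/support/vcr.rb
require 'vcr'
require 'webmock/rspec'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'
  c.hook_into :webmock                  # VCR uses WebMock's low-level stubbing under the hood
  c.configure_rspec_metadata!           # lets you tag examples with :vcr
end

# in a feature or request spec
describe 'Checkout', :vcr do
  it 'charges the card through the payment service' do
    # the first run records the real HTTP interaction to a cassette;
    # subsequent runs replay it, so the test stays fast and deterministic
    expect(PaymentGateway.charge(amount: 10_00)).to be_success
  end
end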

from betterspecs.

conzett avatar conzett commented on July 28, 2024

@marnen Thanks for the detailed responses, WebMock and VCR look like decent solutions to those issues. You've definitely changed my opinion on the subject overall. I appreciate you taking the time.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@conzett: Glad you found it useful! I'm always happy to swap tips about testing strategies.

from betterspecs.

conzett avatar conzett commented on July 28, 2024

After looking at Capybara 2.0 (it's been a long time since I checked it out -- so many improvements!), it looks like they have decoupled it from the RSpec request/integration specs.

Capybara specs now go in /spec/features folder rather than integrating with /spec/requests. Has anyone tested with this new setup yet? Do you maintain both folders and differentiate between the two? More info here: http://rubydoc.info/gems/rspec-rails/file/Capybara.md#Upgrading_to_Capybara-2_0
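
For concreteness, the kind of spec that now lives under spec/features would look roughly like this (the sign-in form here is just an example):

# spec/features/sign_in_spec.rb
require 'spec_helper'

feature 'Signing in' do
  scenario 'with valid credentials' do
    visit '/session/new'
    fill_in 'Email', with: 'user@example.com'
    fill_in 'Password', with: 'secret'
    click_button 'Sign in'

    expect(page).to have_content('Signed in successfully')
  end
end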

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@conzett: I recommend very strongly against using RSpec integration and request specs at all, since I believe they function at the wrong level of abstraction. I'd recommend Cucumber for this purpose instead. IMHO plain RSpec (i.e. without Cucumber on top) is really only good for unit testing.

from betterspecs.

conzett avatar conzett commented on July 28, 2024

@marnen That's what I assumed; there just seemed to be some confusion in the RSpec docs.

I planned to do all integration testing via Capybara, only using the Rack-based RSpec integration tests for things like APIs where you need to assert status codes et al. (as far as I know Capybara doesn't do that, please correct me if I'm wrong).
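
Something like this is what I have in mind (the /api/widgets endpoint is only an illustration):

# spec/requests/api/widgets_spec.rb
require 'spec_helper'

describe 'Widgets API' do
  it 'returns the collection as JSON with a 200' do
    get '/api/widgets', {}, { 'Accept' => 'application/json' }

    expect(response.status).to eq(200)
    expect(JSON.parse(response.body)).to be_an(Array)
  end
end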

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@conzett I think you're confusing Cucumber and Capybara. They are two entirely separate gems.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@conzett To reiterate, the point I was making was that, whether you use Capybara or not, RSpec integration specs are a terrible tool, and Cucumber is nicer for that purpose. It can call any Ruby code including RSpec expectations and Capybara. Take a look at the Cucumber features in any of my projects.

from betterspecs.

conzett avatar conzett commented on July 28, 2024

Could you elaborate on them being a bad tool? I'm a bit confused as I thought you were arguing in favor of integration specs in the context of this betterspecs post.

Re: Cucumber, I just prefer the rspec DSL to the Gherkin syntax, I'm quite familiar with how it works (I used SpecFlow on a large project, it's like the .net equivalent to Cucumber)

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@conzett I am arguing in favor of acceptance/integration testing rather than controller unit testing. But those integration tests should not be written in RSpec, which is the wrong tool for them. (Yes, I know it looks like you can write a good acceptance test in RSpec. You can't, really; it's kind of like the problem with controller tests in that such tests will work, after a fashion, but you'll be testing the wrong thing. To write good acceptance tests in RSpec, you basically have to reimplement Cucumber, as detailed below.)

See, when you write integration tests, you're describing UI and user actions, not programmer concepts. That means you need to describe them in user terms, not programmer terms. The only suitable tool I know of for describing the UI in user terms, and still making those descriptions executable, is Gherkin. Whether you prefer the RSpec DSL is immaterial, I think: RSpec works at the wrong level of abstraction to make it a good choice for integration testing.

Some people get around this by writing high-level declarative methods in their RSpec integration tests. That is a particularly pointless approach, I think: it's basically an inferior reimplementation of things that Cucumber/Gherkin already does very well.

In other words: RSpec is superior for unit testing and other programmer-facing stuff, because it functions in programmer terms. Gherkin is superior for integration testing, because it describes the UI in user terms. I'd no more use RSpec for my integration tests than I'd use Cucumber for my unit tests: each DSL is good for a particular purpose, and awkward elsewhere.

from betterspecs.

sdeframond avatar sdeframond commented on July 28, 2024

Fascinating discussion!

@marnen Are you saying that Cucumber is better than RSpec/Capybara because of the way it describes the UI (the syntax), or because it technically tests an additional layer?

My understanding was that Capybara is already able to load the HTML generated by running the whole stack (from DB to views and Rack) and then execute the JavaScript. I am struggling to see what else you can test with that machinery.

Unless I am missing something, I feel like the debate about RSpec's DSL vs. Cucumber's is a topic for another discussion.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@sdeframond:

Are you saying that Cucumber is better than Rspec/Capybara because of the way it describes the UI (the syntax) or because it technically tests an additional layer?

The former, more or less. The abstraction gained from describing the UI in user terms, not programmer terms, is extremely valuable, and the spec DSL by itself can't give you that.

Of course Cucumber is just calling RSpec and Capybara under the hood, but it's preventing you from thinking directly in Ruby when you're describing UI. This often seems like needless indirection at first, but it's actually essential.

And yes, I agree that this is losing the topic; I was about to say the same thing.

from betterspecs.

margaretdax avatar margaretdax commented on July 28, 2024

I'm all for integration tests (big fan of Cucumber/Capybara) but I feel like this is very out of place in a site giving best practice recommendations. I don't think this should be in there, or at the very least there should just be a nod to the debate, with arguments given for both sides. I'd recommend this be a section presenting various viewpoints rather than a "this is the right way and I say so", which is the impression I get when I see no sources and personal language.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@lblackburn I don't know if you're referring to me, but I've made every effort to keep personal opinions out of my recommendations, as I think opinions have little place in software development. I hope I've provided logical justifications for everything I've said.

from betterspecs.

margaretdax avatar margaretdax commented on July 28, 2024

No @marnen, sorry for any confusion, my comment was directed at the content of this section http://betterspecs.org/#integration section

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@lblackburn Cool. Sorry for the misunderstanding.

from betterspecs.

onebree avatar onebree commented on July 28, 2024

Right now we are at ~70% coverage and rely only on what I take to be integration testing. We use Capybara to test the GUI as if an admin/user were clicking through it. We "know" the data successfully saved to the database when it renders within a table. I agree that it is slow. It can also be annoying when Capybara can't find an element that I can clearly see and use in development mode.

Any/all validations are presented as success/error messages via our models (sometimes controllers if complex). We then check to make sure the error messages can be found by Capybara.

I also agree that if the UI changes, as ours just did when moving to Bootstrap, it requires a complete overhaul of the specs and views.

How would you suggest we manage our testing, then? Continue as integration and change whenever a label/view changes (not too often, manageable), or separate into MVC/routes/etc specs?

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@onebree Your Capybara tests sound OK as far as they go (and I hope you're driving them from Cucumber). But do add some unit specs for the models too -- there are some things that are impractical to test solely from integration tests. (And get into the habit of developing story- and test-first!)

The specs changing if the UI changes is a feature, not a bug, at least to some extent, but there are ways of using helper functions or (if you're not using Cucumber) page objects to make this more manageable.
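
To illustrate what I mean by a page object (the class, labels, and selectors below are invented for the example), the point is that only one place knows how the page is built:

# spec/support/pages/admin_records_page.rb
class AdminRecordsPage
  include Capybara::DSL   # gives the object visit, fill_in, has_css?, etc.

  def create_record(name)
    click_link 'New record'
    fill_in 'Name', with: name
    click_button 'Save'
  end

  def has_record?(name)
    has_css?('table tr', text: name)
  end
end

# in the spec itself, the intent stays readable even if the markup churns:
records = AdminRecordsPage.new
records.create_record('Widget 1')
expect(records).to have_record('Widget 1')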

from betterspecs.

onebree avatar onebree commented on July 28, 2024

@marnen Thank you for the reply on this stale thread!

One of our Capybara tests goes as follows:

context "HAPPY PATHS" do
  #...
  scenario "create a parking lot and check hints count" do
    @tenant.default_location.parking_lots.first.destroy
    click_link "Create Parking Lot"
    fill_in "Name", with: "Parking Lot 4"
    select "Corporate", from: "Workgroup"
    click_button "Submit"
    success_messages ["Parking Lot successfully created."]
    has_no_error_message
    expect(page).to have_css("table#parking_lots tr", count: 3)
    expect(page).to have_css("table#parking_lots tr", text: "Parking Lot 4")
    expect(@tenant.default_location.parking_lots.find_by_name("Parking Lot 4").hints.count).to eq(4)
  end
  # ...
end

If we need to test dependency validations, we create or destroy the dependency immediately before using Capybara to find gui elements.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

I'm not sure what you're attempting to describe to me or get my input on here.

I am sure, though, that you should be using Cucumber for this. Always describe your UI in user terms, not programmer terms.

I also think it's likely that that should be 4-5 scenarios, not one. Notice how many assertions you have; normally that's a sign that you should split it up to have only one or two assertions per scenario.

Sample Gherkin translation:

Feature: (whatever the user goal is)

Scenario: Create parking lot and check hints
  Given I have no parking lots
  And I am on the *** page
  When I fill in the following:
    | name      | Parking Lot 4 |
    | workgroup | Corporate     |
  And I click "Submit"
  Then I should be on the *** page
  And I should see "Parking Lot successfully created."
  And I should see 3 parking lots
  And I should see "Parking Lot 4" within a parking lot
  # not translating the last assertion, because it should go in a unit test or be tested through the GUI

from betterspecs.

onebree avatar onebree commented on July 28, 2024

Okay. I wanted to ask if the type of tests we write are enough. I do agree that Cucumber is better for testing the GUI, but I was not here yet when they decided. The reason they chose RSpec, I guess, was to have the ability to test models, as in the last assertion. They also decided to only test the GUI through feature specs, as it seemed to be the most important thing to test.

Basically all tests were written after MVC. I was just trying to get help from this discussion as it is about testing what you see/integration testing.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

Okay. I wanted to ask if the type of tests we write are enough.

I'd say it's unlikely, but not absolutely impossible. How do you test complex model logic?

The reason they chose RSpec, I guess, is to have the ability to test models, such as in the last assertion.

Then clearly, they didn't know enough to make a useful decision: Cucumber steps are generally implemented in RSpec, so that would have been no problem.
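
(A Cucumber step is just a Ruby block, so the model assertion you wanted is perfectly possible from a step. Sketching with names borrowed from your example:)

# features/step_definitions/parking_lot_steps.rb
Then(/^I should see "(.*)"$/) do |text|
  expect(page).to have_content(text)   # Capybara finder plus an RSpec expectation
end

Then(/^the parking lot "(.*)" should have (\d+) hints$/) do |name, count|
  # plain Ruby underneath, so reaching into the model layer works fine here too
  expect(ParkingLot.find_by_name(name).hints.count).to eq(count.to_i)
end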

They also decided to only test the gui through features specs, as it seemed to be the most important thing to test.

Yes, if you have to only have one kind of test for some reason, that is generally the right kind, because it can test everything, whereas unit tests cannot. But unit tests are often useful anyway; it's impractical to set up an integration test for every permutation of complex model logic.

Basically all tests were written after MVC.

...which could explain the abysmal coverage numbers. Are they at least doing all new development test-first?

from betterspecs.

Fryie avatar Fryie commented on July 28, 2024

"Test what you see" seems odd to me. In fact, users don't "see" your models either. Yet still you are not advocating that we only write acceptance tests?

The most important problems with acceptance tests are:

  • they don't scale (I cannot acceptance-test EVERY path through my code)
  • they are slow (even if Rails controller tests are slow themselves, an acceptance test is still slower, especially on a JS-heavy website)

Of course I write acceptance tests for every user story. But what about error handling? What about "my redis connection died and I cannot enqueue this job"? What about "the third-party API is down and I cannot process payments"? What about invalid or malicious user input, etc.? Testing all of these cases (which may well appear on multiple parts of the site) through the UI doesn't seem practical to me.

If you say that there is no value in unit testing controllers you're basically saying that controllers are not meaningful units within your code (either that, or you're against unit testing in general).

The controller's responsibility is a) to take input from the user and pass it to the app and b) to parse the response from the app and deal with it in a way that makes sense to a user (either by returning some JSON in an API or by setting up instance variables for a view). That's enough responsibility that a unit test makes sense at least in some cases.

Also, on another note, "skinny controller, fat model" is an antipattern IMHO. The more reasonable approach is "skinny everything". And just because classes are skinny, doesn't mean there's no value in unit testing them.

To each their own, I guess, but I see no a priori reason why unit testing controllers should be considered bad.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@Fryie:

"Test what you see" seems odd to me.

Why? It is the only way to know that your application works as expected for the user.

In fact, users don't "see" your models either. Yet still you are not advocating that we only write acceptance tests?

Acceptance tests are the most important tests, because they are the only tests that can show that the application is working as a whole.

I write unit tests to make refactoring easier and to increase the number of execution paths I can test, not because I think they will tell me anything about whether the application as a whole works properly. Unit tests -- even controller unit tests -- cannot give you any assurance at all that the application works as desired for the user. Only acceptance tests can do that.

The most important problems with acceptance tests are:

they don't scale (I cannot acceptance-test EVERY path through my code)

This right here is a big reason to have unit tests. The other big reason is that it's often more efficient to test at a smaller granularity before running acceptance tests.

However, don't delude yourself into thinking that you can feasibly unit-test every path through your code either. That quickly runs into huge combinatorial explosions. C0 plus some common C1 use cases is usually as good as you're going to get.

they are slow (even if Rails controller tests are slow themselves, they're still slower than an acceptance test - especially on a JS-heavy website)

They can actually be quite fast if you use a headless browser. But I'd rather my tests be correct than fast.

Of course I write acceptance tests for every user story. But what about error handling? What about "my redis connection died and I cannot enqueue this job"? What about "the third-party API is down and I cannot process payments"?

Acceptance tests are ideal for this. Utilities such as WebMock are great for simulating error conditions caused by external services.
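
For example (the payment endpoint URL is invented), WebMock lets an acceptance test force exactly those failures and then assert on what the user sees:

# before driving the checkout flow through the UI
stub_request(:post, 'https://payments.example.com/charges').to_return(status: 503)
# or simulate the connection hanging instead:
# stub_request(:post, 'https://payments.example.com/charges').to_timeout

expect(page).to have_content('Payments are temporarily unavailable')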

What about invalid or malicious user input, etc.?

You can use acceptance tests for that.

Testing all of these cases (which may well appear on multiple parts of the site) through the UI doesn't seem practical to me.

It's quite practical. I do it on a regular basis. Read my earlier posts in this thread for suggestions on how.

Any test that simulates user input should involve the UI, I think.

If you say that there is no value in unit testing controllers you're basically saying that controllers are not meaningful units within your code (either that, or you're against unit testing in general).

I think you may have responded to this thread without actually reading it, because I already answered this above. Briefly put, while a controller may be a meaningful unit, in a proper Rails application, the controller should be nothing more than a tiny piece of brain-dead glue code. All the logic should be in models and service objects, so you can't meaningfully unit-test your controllers without mocking all their collaborators -- and at that point, you're testing your mocks, not your controllers.

If your controllers have enough logic in them to benefit from unit tests, then you should refactor them.

Am I against unit testing in general? No. But I don't think it's anywhere near as important as acceptance testing. I'd be willing to omit unit tests for simple models too, if I had suitable acceptance tests (though I don't tend to do this in practice).

The controller's responsibility is a) to take input from the user and pass it to the app

Yes.

and b) to parse the response from the app and deal with it in a way that makes sense to a user

Not as you stated. Marshaling data for the view is the controller's responsibility, but the parsing and processing of user input is supposed to take place in the model layer.

That's enough responsibility that a unit test makes sense at least in some cases.

And that's what's wrong with your controllers. You should create model methods for the processing. Then the controller consumes the results of those methods and passes them to the view.
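
Concretely, the kind of controller I'm describing is nothing but glue (Report.generate_for and current_user are made up for the example; the model method stands in for whatever processing you need):

class ReportsController < ApplicationController
  def create
    # all parsing and processing happens in the model layer; the controller only delegates
    @report = Report.generate_for(current_user, params[:report])

    if @report.persisted?
      redirect_to @report, notice: 'Report created.'
    else
      render :new
    end
  end
end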

Also, on another note, "skinny controller, fat model" is an antipattern IMHO. The more reasonable approach is "skinny everything".

"Fat model" as I interpret it means that the model layer is fat. Of course no one class should be fat. But almost nothing should happen in the controller layer. The controller just brainlessly passes data from model to view, while all the logic is in the model layer.

That's what "skinny controller, fat model" really means, I believe. And that is proper Rails MVC, not an antipattern.

And just because classes are skinny, doesn't mean there's no value in unit testing them.

If a class does nothing but pass data back and forth between its collaborators, and contains no logic of its own, then there is nothing to unit test that's meaningful.

And that is what a proper Rails controller should look like.

To each their own, I guess,

Software engineering ought to be based on demonstrable facts. I do not believe that "to each their own" is a phrase that has any place in this discipline.

but I see no a priori reason why unit testing controllers should be considered bad.

Because it's unnecessary, it's painful to implement, it verifies the wrong things, and it's only useful if you're doing MVC wrong. Add those facts up and the conclusion is quite clear: it's a waste of time and effort. The time would be better spent refactoring logic into the model layer and writing acceptance tests.

from betterspecs.

sdeframond avatar sdeframond commented on July 28, 2024

Relevant: a very interesting discussion between DHH, Martin Fowler and Kent Beck about what to test and why.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@sdeframond: ooh, that's fascinating (though I wish they'd transcribed it -- I have more time to read than to watch video)! I'll note, however, that DHH apparently thinks hexagonal Rails is a bad thing, while I think it's a path to better use of Rails MVC. (In general I don't think much of DHH these days -- he did a great job with early Rails, but can't seem to see beyond it.)

from betterspecs.

Fryie avatar Fryie commented on July 28, 2024

Software engineering ought to be based on demonstrable facts. I do not believe that "to each their own" is a phrase that has any place in this discipline.

Something does not become a science simply by someone stating that it is. If you want demonstrable facts, please cite extensive peer-reviewed studies, not just personal anecdotes with thousands of possible confounders. Failing that, I suppose that yes, it is indeed a matter of opinion.

I think we have different ways of writing both applications and tests in general. Most importantly, I do not write tests to verify correctness. In fact, in almost all cases I am much quicker verifying correct behaviour by hand.

The reason I test is to catch regressions, and for that purpose unit tests are as valuable as integration tests. Of course unit tests can be brittle, but so can integration tests ("whoops, I changed the post login message, now I have to rewrite my tests"), so it's all a question of writing good tests anyway.

Not as you stated. Marshaling data for the view is the controller's responsibility, but the parsing and processing of user input is supposed to take place in the model layer.

I disagree. My models should not have to know that they are part of a Rails app (it's bad enough that they're tightly coupled to ActiveRecord, i.e. to the database). I should be able to take my models and reuse them somewhere else; that's not just theory, I've done that before.

The controller on the other hand is the boundary, the entity that translates web requests into business requests and business responses into web responses (including setting up instance variables for the view, rendering a JSON representation, etc). IMHO, this is similar to the approach discussed by Uncle Bob in Architecture: The Lost Years.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@Fryie:

Something does not become a science simply by someone stating that it is.

...which I didn't do. I merely said that we should take an evidence-based approach to software engineering, not that it was a science.

Most importantly, I do not write tests to verify correctness. In fact, in almost all cases I am much quicker verifying correct behaviour by hand.

What do you write tests for, then? (Yes, I see you said regressions, but is that the only purpose?) And what do you mean by "correctness" here? (I can think of at least two meanings that make sense in this context.)

You're right that it's generally quicker to run a one-off test by hand than to automate it. However, refactoring and other maintenance activities require repetition of the same tests over and over, to make sure that nothing breaks during the process. And that's where automation comes in.

Also, think about this: the way you make sure your application is correct is by catching regressions. And catching regressions has no purpose except to ensure your application is correct. So what do you mean when you say you don't write tests to ensure that the application is correct? What would that kind of test be, to you?

My models should not have to know that they are part of a Rails app

Agreed. That is completely orthogonal to the idea that the model layer should be where data processing happens. The model can process data received from the controller without knowing where it came from or where it's going -- and it should. The controller should call model methods and consume the return values.

The service objects, OTOH (which are part of the model layer but somewhat different) generally probably do know that they're part of a Rails app, and have less reusability.

The controller on the other hand is the boundary, the entity that translates web requests into business requests and business responses into web responses

That is how Reenskaug-style MVC is designed, as used in Smalltalk and in Cocoa. It is emphatically not how Rails MVC (originally called Model2 MVC, from the name of an earlier implementation of this pattern) is intended to work. There is more than one MVC architecture paradigm out there, and the term is thus a bit overloaded.

Reenskaug MVC doesn't work very well for server-side Web applications, or so I understand. It's unfortunate that the related but different paradigm for Web applications is called by the same name even though it's not the same thing.

The controller in Reenskaug MVC is an intelligent object, responsible for quite a lot of stuff (and thus it makes sense to unit-test it). The controller in Rails MVC should be as dumb as you can possibly make it, simply delegating to intelligent classes in the model layer.

Right now it sounds like you're trying to do Reenskaug MVC in a framework that is designed for a different pattern altogether.

from betterspecs.

sdeframond avatar sdeframond commented on July 28, 2024

By the way, do any of you know how to do proper integration tests with asynchronous jobs? I am working with Sidekiq now, but information about other frameworks would be interesting too.

I haven't found anything better than to test that 1) the job was correctly enqueued, then 2) the job does what it should when run. This is kind of a mixed approach between unit and integration testing. I wish I could write end-to-end integration tests, though.

What do you think?

-Sam

from betterspecs.

Fryie avatar Fryie commented on July 28, 2024

While there are options to inline Sidekiq so the jobs are run directly (see the docs), I think it's not a particularly good idea.
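
(For reference, the two modes from the Sidekiq testing docs look roughly like this; HardWorker stands in for any worker class:)

# spec_helper.rb or a support file
require 'sidekiq/testing'

Sidekiq::Testing.inline!        # perform_async runs the worker immediately, in-process

# the default fake mode just pushes jobs onto an in-memory array instead:
Sidekiq::Testing.fake!
HardWorker.perform_async(42)
expect(HardWorker.jobs.size).to eq(1)   # assert the job was enqueued without running it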

Basically there are several (not exclusive, but complementary) strategies:

  • Test both components in isolation (the enqueuing and the worker itself) - which you already do
  • Test the contract between the two (e.g. with something like pacto)
  • Have a dedicated test environment that loads both components and runs an end-to-end test

The last option is very expensive (and sometimes not feasible, e.g. if you have a delay for the worker), so IMHO it should be restricted to very important cases.
I think this presentation by Martin Fowler about microservices testing is interesting in this regard.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@sdeframond:

By the way, does any of you know how to do proper integration tests with
asynchronous jobs?

That's a really interesting question. I'm trying to remember how I've handled this in the past.

I think I would tend to have the job run synchronously in the testing environment; otherwise there's really no way to follow the whole process through. Or (mostly equivalent) somehow get a promise from the enqueued job that resolves when it's done, and test that.

It seems to me that this is the approach I've taken in the past: run synchronously if Rails.env.test?. But I don't have access to that code now to check.

Another approach would be to treat the queue like an external service: mock the results of the enqueued job, then test the job logic elsewhere.

I'm not sure which of these I like better. They all have problems. I'd tend to decide on a case-by-case basis.
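
A sketch of the "synchronous in test" idea, with made-up names (ReportWorker is whatever your Sidekiq worker happens to be):

def enqueue_report(report)
  if Rails.env.test?
    ReportWorker.new.perform(report.id)    # run right away so the integration test can follow through
  else
    ReportWorker.perform_async(report.id)
  end
end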

from betterspecs.

tgaff avatar tgaff commented on July 28, 2024

Considering how contentious this rule is, I really think it should be removed from the guide as a recommendation.
If the goal of betterspecs is to suggest guidelines based on the collected knowledge, experience, and general consensus of RSpec users on the best ways to use RSpec, then it doesn't seem that we're serving that goal by listing something with so little agreement.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@tgaff The principle isn't controversial. You'll notice that most of the disagreement comes from people who are new to RSpec, not from experienced RSpec users.

Besides, if it were obvious, there would be no need to recommend it, would there?

from betterspecs.

brunofacca avatar brunofacca commented on July 28, 2024

IMO this guideline should be updated.

Now I only create integration tests using RSpec and Capybara.

It should probably say RSpec and/or Capybara. I don't think feature specs with Capybara can replace controller specs and/or request specs in most cases. Even if they could, it would result in a slow and likely brittle test suite.

For instance, Capybara is not suited for testing APIs. You should probably mention request specs, which are suited for API testing and widely used in integration tests as of Rails 5.

The RSpec 3.5 release notes say:

For new Rails apps: we don't recommend adding the rails-controller-testing gem to your application. The official recommendation of the Rails team and the RSpec core team is to write request specs instead. Request specs allow you to focus on a single controller action, but unlike controller tests involve the router, the middleware stack, and both rack requests and responses. This adds realism to the test that you are writing, and helps avoid many of the issues that are common in controller specs. In Rails 5, request specs are significantly faster than either request or controller specs were in rails 4, thanks to the work by Eileen Uchitelle of the Rails Committer Team.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@Fryie:

Are [controllers] supposed to cover business logic?

Explicitly not. That's the model's job, and always has been as far as Rails MVC is concerned. This is nothing new.

If your controllers are extremely dumb (i.e. they do exactly zero things before piping everything into, say, a Trailblazer operation), then by all means don't unit test them.

That's how I strive to structure my controllers (and usually succeed). Anything complex goes into a model or a service object (which is really a special kind of model).

But that's probably not how 95% of Rails projects are set up.

You may be right, but "skinny controller, fat model" has long been the Rails ideal. Therefore, if you're doing Rails right, controller unit tests are generally not meaningful. If you need controller tests, you're probably doing Rails wrong by making your controllers too fat.

As soon as there is even just one strong parameters method in there, you basically have parameter validation, and then that's logic that should be covered in your tests.

...which is why I hate strong parameters. That kind of logic belongs in the model layer. (As I've said elsewhere in this thread, I basically proxy my strong parameter logic to the model layer in my applications. That way I work with, rather than against, the framework, but still keep the data validation logic in my models.)
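
By "proxy" I mean something in this spirit (a sketch only; the model and attribute names are illustrative):

# app/models/parking_lot.rb
class ParkingLot < ActiveRecord::Base
  # the model layer decides which attributes are acceptable
  def self.permitted_params(params)
    params.require(:parking_lot).permit(:name, :workgroup)
  end
end

# app/controllers/parking_lots_controller.rb -- the controller stays brain-dead
def parking_lot_params
  ParkingLot.permitted_params(params)
end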

@brunofacca:

I don't think feature specs with Capybara can replace controller specs and/or request specs in most cases.

They can (especially with Cucumber on top) and should, if the controllers are sufficiently skinny. (If they're not sufficiently skinny, that's a problem to fix, not to palliate with more tests.)

For instance, Capybara is not suited for testing APIs. You should probably mention request specs, which are suited for API testing and widely used in integration tests as of Rails 5.

For API testing, I very highly recommend the https://github.com/zipmark/rspec_api_documentation gem.

from betterspecs.

brunofacca avatar brunofacca commented on July 28, 2024

@marnen:

They can (especially with Cucumber on top) and should, if the controllers are sufficiently skinny. (If they're not sufficiently skinny, that's a problem to fix, not to palliate with more tests.)

I'm referring to applications with extremely skinny controllers. Let me rephrase: feature specs with Capybara may replace controller/request specs, but it will result in a slower test suite. Additionally, finding the problems behind test failures will likely take longer (lower-level tests help to pinpoint issues), and the fact that your tests rely on DOM elements will likely increase brittleness.

For API testing, I very highly recommend the https://github.com/zipmark/rspec_api_documentation gem

I did not know that gem. Looks great, thank you for the recommendation.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@brunofacca:

Let me rephrase: Feature specs with Capybara may replace controller/request specs, but it will result in a slower test suite.

So what? The tests will be more correct, especially if the controllers are skinny.

finding the problems behind test failures should take longer (lower-level tests help to pinpoint issues)

That's why we have model unit tests. If the controllers are sufficiently skinny, there is essentially nothing in them that will break, so there is nothing in them to test.

and the fact that your tests rely on DOM elements will likely increase brittleness

The fact that the tests rely on DOM elements increases correctness. I want my tests to break when the DOM changes significantly, because the DOM is what the user interacts with.

from betterspecs.

brunofacca avatar brunofacca commented on July 28, 2024

@marnen:

So what? The tests will be more correct, especially if the controllers are skinny.

IMO you can't enjoy the full benefits of the TDD process with a slow feedback loop. Also, if tests take too long to run, it is easier to lose focus. I believe tests can be correct and fast at the same time.

The fact that the tests rely on DOM elements increases correctness. I want my tests to break when the DOM changes significantly, because the DOM is what the user interacts with.

The DOM may not have to change significantly to break your tests.

Anyway, I suggest we agree to disagree, as this seems like one of those discussions that can go on forever.

from betterspecs.

marnen avatar marnen commented on July 28, 2024

@brunofacca:

IMO you can't enjoy the full benefits of the TDD process with a slow feedback loop.

That's true to a point, which is why we use the model unit tests as the inner loop. Then we use the feature/acceptance tests as an outer loop if they're too slow to run all the time.

Personally, though, I am so convinced of the benefits of test-first development that I'm willing to be pretty patient for slightly slower but correct tests to run.

Also, if tests take too long to run, it is easier to lose focus.

I use Guard and other tools that help track that focus for me.

I believe tests can be correct and fast at the same time.

That's the ideal, but if I have to choose one of the two, I will choose correct over fast nearly every time—within reason: I don't recompile the Ruby interpreter or reinstall my gems for every test, after all. :)

The DOM may not have to change significantly to break your tests.

I want my tests to break if the DOM changes in such a way that the user would have a different experience. If they do not do so, then the tests are not sufficiently exercising the UI.

OTOH, if the tests break frequently when the DOM changes in trivial ways, then the tests are too picky. But it's easy to write tests that work through the UI and are just picky enough. The way to do that is to write tests that think like a user: look for display text, not classes or IDs, don't pay too much attention to element order if the user wouldn't, and so on.
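
Concretely, the difference is between assertions like these two (the markup is invented):

# brittle: breaks as soon as someone renames a class or reorders the markup
expect(page).to have_css('div.flash-notice.col-md-6:nth-child(2)')

# just picky enough: breaks only when the user's experience actually changes
expect(page).to have_content('Parking Lot successfully created.')
within('table') { expect(page).to have_content('Parking Lot 4') }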

Anyway, I suggest we agree to disagree

I don't do that; I think it's sloppy. This is a field where facts are verifiable, which means that if we disagree, then probably at least one (more likely both) of us has more to learn.

from betterspecs.

brunofacca avatar brunofacca commented on July 28, 2024

@marnen:

I don't do that; I think it's sloppy. This is a field where facts are verifiable, which means that if we disagree, then probably at least one (more likely both) of us has more to learn.

The fact that facts are verifiable in this field does not stop some opinion-based discussions from going on "forever", such as: a) The mockists vs. classicists discussion; b) Extracting model code to concerns vs. service objects in Rails (DHH defends the former); c) Which programming language is best suited for a specific task. I believe our attachment to specific points of view and the fact that most people don't like being proven wrong (even though it provides a great opportunity for learning) tends to skew our perception/interpretation of verifiable facts. I believe they call it "confirmation bias". Anyway, that's a whole other (philosophical) discussion.

I agree that, as long as we enter this kind of technical "debate" with an open mind, we can learn a lot. I sure learned from our discussion, as I'm relatively new to testing and striving to learn as much as I can about the "best practices". In that spirit, thanks for a good talk :)

from betterspecs.

aesyondu avatar aesyondu commented on July 28, 2024

I don't want to add to this discussion again, but to state that this principle is not controversial is ... quite a statement. I also just watched a screencast by Gary Bernhard yesterday, where he refactored a big controller test - in the end the controller spec was smaller and more isolated but it was still kept in place.

Not sure if it's just me, but the link provided now resolves to a porn site, http://solnic.eu/.*. So be WARNED.

from betterspecs.
