One option I have tried in the past is a single "super" operation with a large interface that supports all states, with the state-specific variability handled inside the application code. However, this makes the interface confusing for consumers, since it quickly becomes unclear which attributes apply to which state.
Another option, which I have been rather hesitant to pursue, would be separate operations for each state with concrete contracts (e.g. an API path /my-service/ca/my-operation). This seems like it would introduce a maintainability issue in the code to support 50 operations, and lead to code redundancy.
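To make the trade-off concrete, here is a minimal sketch of the two shapes being compared (the field names and states are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

# Option 1: one "super" operation whose request carries every state's
# attributes as optional fields -- consumers can't tell which apply where.
@dataclass
class MyOperationRequest:
    state: str
    ca_license_number: Optional[str] = None  # only meaningful for CA
    tx_county_code: Optional[str] = None     # only meaningful for TX
    # ...and so on for the remaining states

# Option 2: a concrete contract per state behind its own path,
# e.g. /my-service/ca/my-operation and /my-service/tx/my-operation,
# which keeps each contract clear but means ~50 operations to maintain.
```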
Curious if others have encountered a similar problem.
Some other things I would do:
- Simplify the API to a single path: /service/${STATE}
- Create a dictionary mapping ${STATE} to RequestHandlers for all cases, plus a default case.
- Divide as much functionality as possible into reusable, stateless functions.
- Compose the RequestHandlers from those reusable functions.
Now when the server receives a request with ${STATE} it can look up the appropriate handler in the dictionary and process the request.
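A minimal sketch of that dispatch pattern, assuming plain dict requests and responses (all names here are placeholders):

```python
from typing import Callable, Dict

Request = dict
Response = dict
RequestHandler = Callable[[Request], Response]

# Reusable, stateless building blocks.
def validate_common(request: Request) -> None:
    if "notice_id" not in request:
        raise ValueError("missing notice_id")

# Default case: what most states can use unchanged.
def handle_default(request: Request) -> Response:
    validate_common(request)
    return {"status": "ok", "state": request.get("state")}

# A state-specific handler composed from the same building blocks.
def handle_ca(request: Request) -> Response:
    validate_common(request)
    return {"status": "ok", "state": "ca", "note": "CA-specific quirks handled here"}

HANDLERS: Dict[str, RequestHandler] = {
    "ca": handle_ca,
    # ...only the states that actually need their own handler
}

def handle(state: str, request: Request) -> Response:
    # Look up the state's handler, falling back to the default case.
    return HANDLERS.get(state.lower(), handle_default)(request)
```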
I'm also concerned that they'll randomly go and swap out their systems, probably poorly (e.g. fbo.gov just switched to beta.sam.gov, half-cocked, with missing features everywhere and an underspecified, randomly updated API).
So I figure 50% of this thing is going to be workarounds, some of them shared, so the plan was to just map localities to functions m:1 and slowly grow a little library of shared logic.
When one of them does their nonsense and randomly changes their API behavior, just fiddle with the dict and give that locality its own function.
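A sketch of that m:1 mapping and the "fiddle with the dict" step (the locality names and the quirk are invented): most entries point at a shared function, and a misbehaving source gets its own wrapper that still reuses the shared logic.

```python
# Shared logic most localities can use as-is.
def handle_standard_portal(request: dict) -> dict:
    return {"locality": request["locality"], "notices": request.get("notices", [])}

# m:1 -- many localities map onto a few shared functions.
HANDLERS = {
    "springfield": handle_standard_portal,
    "shelbyville": handle_standard_portal,
    "capital_city": handle_standard_portal,
}

# capital_city changed its behavior under us: wrap the shared function
# and patch over the new quirk in one small place.
def handle_capital_city_workaround(request: dict) -> dict:
    response = handle_standard_portal(request)
    response["notices"] = [n for n in response["notices"] if n.get("active")]
    return response

# "Fiddle with the dict": only that locality gets its own function.
HANDLERS["capital_city"] = handle_capital_city_workaround
```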
The key thing, though, is that I don't expect any of these guys to stay stable, or even sane, so there's not much hope for anything elegant -- the best you can hope for is to reduce the boilerplate.
Worst case you have 450 unique functions, but as long as each one is simple enough it's not a big deal for maintenance. The main task then is to actively watch for things changing under your feet, and to fail fast -- if it's not exactly what you expect, fail, report it, and get it fixed, because I'm definitely not watching 450 changelogs, which probably don't exist anyway.
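A sketch of the fail-fast check, assuming the expected notice fields shown here (they're made up): anything that doesn't match exactly raises and gets reported rather than being silently parsed.

```python
EXPECTED_FIELDS = {"notice_id", "title", "posted_date"}

class UpstreamChanged(Exception):
    """Raised when a source no longer looks like what we built against."""

def parse_notice(source: str, raw: dict) -> dict:
    missing = EXPECTED_FIELDS - raw.keys()
    unexpected = raw.keys() - EXPECTED_FIELDS
    if missing or unexpected:
        # Fail fast and loudly instead of producing quietly wrong data.
        raise UpstreamChanged(
            f"{source}: missing={sorted(missing)} unexpected={sorted(unexpected)}"
        )
    return {field: raw[field] for field in EXPECTED_FIELDS}
```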
However, in my case we found a much nicer 90% solution -- our main interest was filtering notices, and everyone supports email... So instead we just made a big inbox, set up some rules for initial filtering, then pulled the emails down to regex-filter them and slapped Gmail labels onto them.
Finally, have some humans read the much shorter filtered list of emails to determine relevancy, extract any details, and push it forward.
Not fully automated... but it was a couple days' work to implement and good enough, and we'll see if we ever actually need something more legitimate down the road.
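For a rough idea of what that pipeline might look like over IMAP (the host, credentials, label name, and regexes are placeholders; the X-GM-LABELS store is a Gmail-specific IMAP extension, and the label is assumed to already exist):

```python
import imaplib
import re

# Hypothetical relevancy filters.
RELEVANT = [re.compile(p, re.I) for p in (r"\bsoftware\b", r"\bIT services\b")]

def label_relevant(host: str, user: str, password: str) -> None:
    imap = imaplib.IMAP4_SSL(host)
    imap.login(user, password)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(BODY.PEEK[TEXT])")
        body = msg_data[0][1].decode("utf-8", errors="replace")
        if any(rx.search(body) for rx in RELEVANT):
            # Tag the message so a human reviews the much shorter list later.
            imap.store(num, "+X-GM-LABELS", "maybe-relevant")
    imap.logout()
```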
Have a big switch statement to control business logic in the service.
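For contrast with the dictionary-of-handlers approach above, a minimal sketch of switch-style routing using Python's match statement (3.10+; the branches are illustrative):

```python
def process(state: str, request: dict) -> dict:
    # All state-specific business logic routed inside the service itself.
    match state.lower():
        case "ca":
            return {"state": "ca", "handled_by": "CA-specific branch"}
        case "tx" | "ny":
            return {"state": state, "handled_by": "shared branch"}
        case _:
            return {"state": state, "handled_by": "default branch"}
```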
Do you want to manage them as a single application and code base or 50?