
Using llSleep to reduce Script Time


Vulpinus


I can concur with a lot of that.

My first proper contract after leaving college was to computerise everything in a large manufacturing plant, except accounts who wanted to keep their minicomputer. The firm was quite laid back and we did it basically by me sitting in the various departments asking the people who were doing the jobs what they needed, following their work patterns, and putting code together in the afternoons. We phased the systems in department-by-department in parallel with their paper systems, adjusting things and adding "hey, can you make it do this as well?" things as we went along. Completely not what my books on systems analysis had told me to do.

At the end of a year, the stock ordering, automated production planning, QC, real-time order processing, warehousing and transport planning were running smoothly and doing just what they all needed. This was in the days of 8088/8086 PCs with a 10base2 ethernet network and MS-DOS 3.3, so you can imagine the work involved in writing a multiuser software suite from scratch. 100% success - the suite was in use for over a decade.

A few contracts later, I landed one with the UK's NHS to develop a three-year forecasting system for their IT systems' growth in the hospitals (and even new hospitals being built), to aid planning annual budgets. The systems they wanted me to model and forecast for were minicomputer networks, and they had written very detailed specifications for the competing contractors, right down to the user interface for the software, which was to run on a PC, now with an 80286 CPU. Wow - the power!

The system required large amounts of data to be entered monthly in order to keep the forecast accurate for a rolling three year period. I told them their proposed interface was too hard for people to use, and they should let me design a simpler one; perhaps even try to link with the minicomputers and automate much of it.

They refused to listen, but they offered me the contract anyway (I undercut everyone else). I needed the money, so I took it and wrote things to their specifications. My model worked brilliantly; I found out several years later that it had been accurate for all that time... right up until they stopped entering the monthly data. Both of the staff I trained to use it had left, and no one else could handle it even with the detailed documentation I wrote. "I told you so".

The customer might think they know what they want, but it isn't always what they need, and all the specs in the world won't help in that case.

 


I'll add some more to the convo.

I don't disagree that there are difficulties with spec-driven implementations. They can be a real pain sometimes. The downside with them (as Maddy also says) is that often the customer doesn't know exactly what they want.

What I have learned, though, is that they do know what the desired outcome is: what the outputs of the software should be.

It's the same when people get a new house built, or even get their hair cut. They do know what they would like the outcome to be. What they don't know are the implications of the construction details and how those affect the presentation aspects of software: the human/machine interface, data capture, and reporting.

+

In a business looking to add ICT, the question is: who is the user of the ICT software?

Sometimes people will say the user is the staff. In a business, that isn't true: the user is the business. The business uses the outcomes of the software to help it grow. The constructed software helps the staff realise the business outcome.

+

So, in my org:

We start with the business outcome and work back toward the staff. The staff's desires, even though they (and I as well) use the software/devices every day, are the less important of the two. Staff are not like retail customers buying for their own personal use. Staff don't get to choose the tools; they are trained to use the tools provided, to help them realise the business outcome, which is what the business pays them to do.

When a staffer asks, "Why does my device work this way?", we can give them a good reason for it: the business-outcome reason. Once they get that reason, they go: "Oh! OK, makes sense to me." Then the chat becomes about training, if they need it.

+

What I do is ensure that the outcomes are heavily specced, especially the reporting component.

The software must report on these outcomes. It must include these datapoints, and they must be presented in this way. It isn't optional. It must.

Then we work back toward the UI. With the device UI, I pretty much only ensure that the datapoints that must be captured are specced/detailed in the requirement docs. Then we use the iterative/consultative approach you (Vulpinus and Maddy) mention to build that part. It's the best way to do this, for all the reasons you mention.

While staff don't get to choose the tools, they do get a say in how they are used. The UI design has a huge influence on how a tool is used, which is what staff are most interested in: How does this tool work? How can I use it to do my job, neatly and quickly?

What I won't do, though, is allow the UI designers to introduce stuff that would result in the datapoints having to be altered or omitted just so the staff can have a simpler workflow on the device. Simple workflows are highly desirable for both staff and the business, but not at the expense of the business outcome itself.

+

I'll just add here the approach we take to building ICT. It won't work for every business/org, but it might give people in the same situation as us some ideas.

We start with the funder's contract. What exactly is it they want to know?

We have a contractor provider who specialises in this. They identify every datapoint, then they mock up the reports. They are not programmers; they are analysts. And we go over and over and over these, working them all through with our analysts and the funder's analysts as well, over and over until we all agree that yes, this is exactly what the funder requires. Then we get sign-off. It's cast in stone.

+

Then we bring in the backend contractor. They are experts at secure databases. They take the mockups and work out how these can be integrated into our existing systems, what changes need to be made on their end, the dbAPI, etc. There is a No. 1 rule: the reports are not going to change. Not ever. Because they give the funder exactly what they require. The backend guys do whatever it takes to make this happen: given this table structure, populated with this data, and these queries, the reports can be produced.
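To make that idea concrete, here is a minimal sketch in Python with SQLite. The table, columns, and report are invented for illustration (a hypothetical funder wanting monthly visit counts per service); the point it shows is that the report query is fixed, and the backend is free to design any schema that keeps that query producing exactly those datapoints.

```python
import sqlite3

# Hypothetical schema: whatever the backend team designs is fine,
# as long as the cast-in-stone report query below still runs.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE visits (
        visit_id   INTEGER PRIMARY KEY,
        service    TEXT NOT NULL,   -- datapoint required by the report
        visit_date TEXT NOT NULL    -- datapoint required by the report
    )
""")
conn.executemany(
    "INSERT INTO visits (service, visit_date) VALUES (?, ?)",
    [("counselling", "2015-03-02"),
     ("counselling", "2015-03-09"),
     ("housing",     "2015-03-05")],
)

# The fixed report: month, service, visit count.
# The backend may restructure tables freely, provided this query
# can still be satisfied with exactly these datapoints.
report = conn.execute("""
    SELECT strftime('%Y-%m', visit_date) AS month,
           service,
           COUNT(*) AS visits
    FROM visits
    GROUP BY month, service
    ORDER BY month, service
""").fetchall()

for row in report:
    print(row)
```

Running this prints one row per month/service pair with its visit count; any schema change that breaks this output would violate the No. 1 rule.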

They then build (or modify from existing) a desktop interface for it: a stock-standard data entry and reporting interface, nothing flash. The analysts then use it to build the reports and evaluate their integrity.

Going forward, this UI is also used by our back-office staff to produce the funder's reports. It has data entry/edit capabilities, so that any mistakes field staff make can be corrected before the reports are submitted. It also helps identify the field staff who are struggling to enter data correctly; further training can be provided so they can get better at doing this themselves.

+

Then it's on to the device UI and secure transmission/exchange.

Another crew specialises in this. This part is pretty fluid spec-wise, except for the hard proviso of the datapoints. That's the No. 2 rule: the db is not going to change, nor is the dbAPI. Not ever. It is what it is, because it has to work the way it does so that the funder gets their reports exactly as they require.

+

We compartmentalise the process: report analysis : backend : frontend.

At each stage we have something we can use. If there is a massive unaccounted-for problem at either the backend or the frontend stage, we can stop, and it doesn't waste the effort and cost that went into the earlier stages.

E.g. if the backend database doesn't get implemented, we still have the report templates (mocked up, containing exactly what the funder wants to know). The data can be obtained from the field staff, hand-collated (entered into a spreadsheet), and the report produced, detailed as required.

If the backend is done and the frontend isn't, then data can be entered directly into the db and the reports run from there.

Both the backend and frontend teams know what the hard limits are before they start. For the db: this report. For the device UI: this database. The report is not going to change, nor is the dbAPI.

+

It's my own POV/experience that this is the best way to do this kind of work: start with the business outcome, define exactly what that is, and then work backwards toward the staff.

If you do it the other way (which some companies/orgs do), you're likely to end up with nothing, mostly because staff (just like retail customers) have lots of different expectations of what the outcome is: the outcome for them.

