Why free software?
I *could* sell it, but I'm not going to until it's ethical to.
tl;dr
I do want to sell a decision engine. Saturn V won't be the one, since it's a research project first. Setting aside the nuances of software licensing, the organization of startup companies, the possibility of "selling out" someday, or even potential copyright infringement, the decision engine does need to be free software. I feel the need to clarify my position on this in order to set realistic expectations, both for myself and for others, about Saturn V and my future decision engine projects.
Why monetize?
Many people are unwilling or unable to run a piece of always-on software on a home server, so there is an opportunity to provide personal decision engines as a cloud-based subscription service. I believe that providing that service in exchange for monetary compensation for the development of decision engine software isn't just a justifiable action but one that does a lot of overall good for the world. Monetary compensation assures a long lifespan for the project, and providing the engine as a service would help many people who otherwise might not be able to use it.
Free software and decision engines
At the same time, I do have the strong conviction that decision engines can only truly succeed at their job as free software.
For starters, a decision engine would process a lot of personal data on its users, more than any feasible user-tracking platform could collect through data analytics. This in and of itself is reason enough to convince me that, no matter what, decision engines should be free software and the specifics of what is done with user data should be public knowledge.
Next, I believe that the privatization of decision engines would limit a user's decision-making capabilities at best and directly exploit a user's disability or maladaptiveness at worst. For instance, a private company creating a commercial decision engine could do the same things that for-profit tech companies always do: paywall crucial features, prioritize addictive app design over user agency, and target advertisements. Rhetorically speaking, one can (in bad faith) trivialize the exploitation of Big Tech users by downplaying the negative effects of social media on mental health, denying users' rights to privacy, or villainizing or victim-shaming the "internet-addicted." However, no such defense can be made of a decision engine exploited in that same bad faith. Decision engines are prosthetics; any benefit to the user is undeniably personal and necessary, and the exploitation of that benefit is unethical.
Even if this hypothetical company is not being explicitly predatory, the privatization of the decision engine still has negative consequences for its users. A centralized development team can only acknowledge so many user needs compared to a decentralized free software community. The ability to modify the source code of the decision engine to add important features is critical given the high degree of customization the engine requires. Limiting or removing that ability undermines one of the engine's core features.
also
GPL rocks.
I didn't know how to properly conclude this.