Category: Uncategorized

  • The Advantages of Customised Generative AI Systems for Fast AI Adoption

    The Advantages of Customised Generative AI Systems for Fast AI Adoption

    OpenAI has brought about incredible awareness of what large language models (LLMs) and AI, in general, are capable of. Some organisations then trained LLMs in the hope of observing impressive ChatGPT-like capabilities on their own organisational data.

    However, training and serving a massive LLM requires a large investment. Specialising a massive LLM also incurs higher economic and environmental costs, as it relies on a larger, more powerful and costlier model than necessary for simpler business use cases.

    In addition, this approach often fails to meet specific business requirements. Even though the resulting AI tool can still be useful as a side tool to aid some business processes, it cannot specifically target and automate the tedious parts of a business process.

    When keeping the organisational data internal is not a concern, some organisations utilise APIs from external providers such as OpenAI. However, models from external providers (with their own response-regulation mechanisms in place) can obstruct exceptional knowledge and suppress desired responses. See our earlier blog articles:

    Regulating the Generative AI Systems Obscures Exceptions to the Knowledge

    Long-term Generative AI Adoption Strategy: Human Empowerment not Replacement

    Evidence-based Personalised Content Generation

    Some organisations may not see a good return on their investment and decide to stop AI adoption altogether. Others persevere and turn to specialising these massive LLMs for different business-process needs.

    However, specialising massive LLMs is not required for many business processes. In the early phases of AI adoption, many use cases do not need a massive LLM to be solved; they can be approached more effectively through problem decomposition and specialised AI models. The end users drive the specialisation of the AI models and LLMs, and this need-driven pace makes adoption easier. The organisation gains full control of both the customisation and regulation mechanisms, which leads to fewer trials.

    Another advantage of using specialised AI models is that task-specific evaluation metrics (including business utility functions) can be applied at each phase of the business process. Any performance drift can be detected and, if required, additional helper models added. In this way, the user gets a more detailed view of functioning and performance than the more general evaluations applied to massive general-purpose LLMs. Efforts can be focused on improving specific components without trading off performance on other tasks.
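
    As a minimal sketch of how such per-component monitoring could work (the component names, baseline values and tolerance below are purely illustrative assumptions, not any specific deployment):

```python
from collections import deque

class ComponentMonitor:
    """Tracks a task-specific metric for one pipeline component and flags drift."""

    def __init__(self, name, baseline, window=100, tolerance=0.05):
        self.name = name
        self.baseline = baseline        # accuracy measured at deployment time
        self.window = deque(maxlen=window)
        self.tolerance = tolerance      # allowed drop before flagging drift

    def record(self, prediction, label):
        self.window.append(prediction == label)

    def drifted(self):
        if not self.window:
            return False
        recent = sum(self.window) / len(self.window)
        return recent < self.baseline - self.tolerance

# Each phase of the business process gets its own monitor and metric.
extract = ComponentMonitor("entity-extraction", baseline=0.92)
classify = ComponentMonitor("ticket-routing", baseline=0.88)

for pred, label in [("A", "A"), ("B", "A"), ("A", "A"), ("A", "B")]:
    classify.record(pred, label)

print(classify.drifted())  # True: recent accuracy 0.5 is below 0.88 - 0.05
```

    A monitor like this can sit behind each phase of the process, so a drop in one component's task-specific metric is caught without re-evaluating the whole pipeline.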

    In addition to lower economic and environmental costs, specialised AI models enable users to monitor and control individual components and improve business performance at large. Hence, they offer a more sensible choice than fine-tuning larger LLMs to improve on one business task.

  • Is AI Silencing You?

    Is AI Silencing You?

    Navigating emerging technologies can be a challenging task, especially for those whose talents, gifts, aspirations, dreams and life purposes are still hidden within.

    Let me explain what I mean. Think of a bodybuilder at age nine. In him, there are dreams, ideas, desires, and appetites that need to be recognised, explored, and expressed. In him, there are choices to be made, and he needs to experience the consequences of these choices, both good and bad. In him, there are small muscles that need to be challenged, stretched, torn, and yet endure and remain submissive under discipline. There is a lengthy process that requires effort and demands commitment for growth to happen. The journey is paved with both strenuous work and profound rest. Screams of excruciating pain and overwhelming joy can be heard along the road. His path is washed with pouring sweat and tears of ecstasy.

    The exploration and expression of our hidden qualities is always a process. These hidden aspects of ourselves make us uniquely different from others, in a good way. But what happens if our unique features and hidden talents, desires, passions, and gifts don’t get recognised, explored, developed, or expressed? This would be a great loss to not only individuals but also to society.

    What happens if the bodybuilder decides to take a shortcut and places silicone implants where his muscles should be? He is unable to lift weights because, simply, he lacks the muscles to do so.

    We also can deny ourselves opportunities to explore, grow, develop, and express our voice if we seek shortcuts and allow AI tools to write and think for us. The cumulative effect of consecutive shortcuts represents a great short-circuit to our destiny. Some of us are called to find a cure for cancer; others are called to lead our country. We need teachers who can teach, leaders who can lead, servants who can serve. Yet, we all need to go through a process—the lengthy process of exercising, practising, learning, and experiencing.

    When we allow a computer algorithm to replace us, our unique voice is lost among other computer-generated, complacent voices. In addition, what messages are we communicating to ourselves? There is the danger we are saying: you have nothing to say, your voice is not important, your thoughts are worthless, your opinions are useless, you can be replaced by AI, and/or you have no influence in society. So, what then is the purpose of our existence if we become dormant and are silenced?

    Where do we go from here now that hundreds of new AI tools are emerging almost every day? Ignorance, fear, and rejection are not the answers.

    Human beings are the most valuable asset in our societies. We would be fools to deny the voices of people who have ‘voiced out’ their inventions in the technology space. They are the ones embracing their unique journey of struggle and joy. They are the persistent ones who have seen their dreams fulfilled. Their inventions are here for us: to propel, empower, support, teach, guide, and inform us. If technology replaces us, it should always be so that we can tackle more challenging things that technology cannot tackle at this time in history. Technology frees us up to pursue novel things and enables our society to grow and advance into new territories. And yes, we will eventually build technology to replace us there too, but so we can venture again and explore spaces that were off limits before.

    But we also need to be careful AI does not replace our uniqueness. What worth is a degree if it was obtained by implanting AI-generated text in the assignments? What worth is an AI-generated idea if the intellectual power, cognitive muscles, and character virtues are lacking to carry this idea out? What worth is an AI-generated opinion if the evidence, reasoning, and logic behind it are lacking? The more we allow technology to replace and eventually remove unique (and irreplicable) voices from our society, the more our societies become weakened and increasingly vulnerable to all kinds of corruption and dreadful acts.

    So next time you reach out to an AI tool to do something for you, ask if it is helping you express your voice—hidden in the depths of your soul—or if it is keeping your voice silent and replacing it. Does it challenge your thought in a way you haven’t been challenged before? Is it causing you to reach out and help others in a way only you can do?

    We must embrace technology, because it is here to stay, and learn to use it for our own good. Technology, innovation, and breakthroughs are aids along our unique journey and adventure of recognising, exploring, and expressing our unique gifts, skills, dreams, aspirations, and desires—a life-long journey of being and becoming all that we were destined to be.

  • Fine-tuning your own biosystem for misinformation detection

    Fine-tuning your own biosystem for misinformation detection

    Over the past years we have all witnessed the kind of damage misinformation can bring to humanity. Yet labelling information as misinformation before it is proven to be so contradicts the scientific principle, as we could be obstructing hypotheses and useful views and opinions. Conversely, as science progresses, information presented in the past has at times proven to be misinformation.

    Many ask themselves then:

    • Is there an ultimate source of true information we could rely on?
    • What tools do we have available to detect misinformation and prevent it from negatively impacting our decision-making?
    • Can we predict the likelihood of new information being confirmed in the future?

    We are accustomed to searching outside ourselves for answers, inventing tools or fine-tuning more AI systems to help with the problem. Yet, in terms of immediate accessibility, the answer to all three questions is our own biological system. Its survival depends on the accuracy of its information, and coupled with imagination it becomes an extraordinary tool for detecting misinformation.

    We have witnessed how AI systems can be fine-tuned to perform well in almost any digitally captured task. Could we not apply similar principles to fine-tune and confirm the less visible aspects of our being, such as the interpretation of bodily signals, the mind-to-body connection, intuition and imagination? There have been numerous research articles on the power of imagination, including the therapeutic benefits of its focused application. In the absence of resources for a clinical trial, or of precise evaluation metrics, these potentials often remain relatively unexplored and are at times labelled pseudoscience. The good news is that clinical trials are not a prerequisite, provided one has the means for a proper evaluation. The variety of evaluation metrics used to fine-tune an AI system and conclude on its readiness can also be used by individuals to fine-tune and evaluate their performance in interpreting their personalised body signals. This can be done in a variety of scenarios or experimental settings, and the general principle behind many such practices is:

    • Take a few slow, deep breaths focusing on the breath only, then imagine the situation where you have decided based on the assumption that the ‘information’ is true.
    • While imagining that future state, feel for any change in bodily sensations or emotions and record these as indicative features.
    • Repeat the process, imagining you have decided assuming the ‘information’ is false.

    To evaluate your performance, the ‘information’ can be anything you do not currently know but that can later be revealed as true or false. During this process the bio-signals become personalised to you, and you gradually learn the most reliable predictive features. This is analogous to what AI does, though embodied AI may struggle to ever surpass the complexity of the totality of our own being, aspects of which are still to be confirmed. As we fine-tune ourselves at an individual level, collectively this can serve as a tool, or an additional feature, for predicting the likelihood of adverse reactions to a new medicine or vaccine. It could become a superior tool to help governments in difficult decision-making times with many unknowns. Neither (AI) technology nor any human expert’s reasoning can guarantee that their system has not been influenced by misinformation or by the obstruction of true information. Your own personal evaluation and improvement over time will give you the confidence that you are making the best decision for yourself (and, indirectly, for others), which is in line with the goals of personalised medicine.
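
    Purely as an illustration of the evaluation-metric analogy above (the trial records below are hypothetical), standard classification metrics could be computed over one's recorded self-experiments like this:

```python
def evaluate_trials(trials):
    """trials: list of (predicted_true, actually_true) pairs from self-experiments."""
    tp = sum(1 for p, a in trials if p and a)          # predicted true, was true
    tn = sum(1 for p, a in trials if not p and not a)  # predicted false, was false
    fp = sum(1 for p, a in trials if p and not a)
    fn = sum(1 for p, a in trials if not p and a)
    accuracy = (tp + tn) / len(trials)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

# Ten hypothetical trials: prediction from bodily sensations vs. later-revealed truth.
trials = [(True, True), (True, False), (False, False), (True, True), (False, True),
          (True, True), (False, False), (False, False), (True, True), (True, True)]
print(evaluate_trials(trials))
```

    Tracking these numbers over time is the individual-level analogue of deciding when an AI system is ready: performance above chance that keeps improving would count as evidence, and performance at chance as a signal to revise the features being used.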

  • What level of machine learning pipeline customization does your use case require?

    What level of machine learning pipeline customization does your use case require?

    A question we often ask ourselves is how many machine learning (ML) opportunities are prematurely ended by the conclusion that the data are inadequate for the required performance. Each data science project typically starts with benchmarking of existing techniques and ML pipelines, followed by fine-tuning and customization until the desired performance is reached. At some stage a decision must be made on whether that performance is attainable. Balanced effort should be placed on fine-tuning ML pipelines and models that start with poor performance versus implementing the custom solution the problem really needs. The best strategy is often guided by intrinsic knowledge of the learning mechanisms of ML models, in line with the experimental findings on the data. Besides experience, intuition also plays a role in settling on a conclusion or a new direction.

    In the early years of ML adoption, a large percentage of machine learning models never reached the deployment stage or degraded fast. Indeed, after auditing several ML models and approaches used in deployment, the largest problems we detected were the lack of critical data pre-processing and of application-appropriate model evaluation metrics. This was more severe in multi-model applications where the same approach was assumed to work in all settings and no automated performance monitoring was in place. Relatively little effort was placed on customizing the ML pipeline to achieve possible performance boosts. Nowadays, matters have improved with increasing amounts of shared knowledge and tools on best practices as they evolve. However, even applying the now more advanced out-of-the-box ML pipelines and models still carries risks. If they become the norm for ticking the performance box, the application may be deprived of a more reliable and higher-performing model. An even greater risk is prematurely concluding that a well-performing machine learning solution is not possible in the applications that require it the most.

    As an example, a recent project of ours was to assess the viability of ML models trained on patient treatment-response data for a disease with a high mortality rate. Previous analysis using statistics and ML could not reach the necessary predictive performance, as most patterns found in the discovery cohort were not reliable in the validation cohort. Indeed, this was the smallest number of samples we had ever had available to train ML models on (a maximum of 18 samples per treatment type). The customization process began with over-sampling and feature subset selection phases tailored to the problem and data. These gradually increased the performance of the tested models, but a severe over-fitting problem remained. It was a hard decision to carry on, given all the contrary results so far and the commonly made claims that ML does not work with so little data. However, observing partial performance across modelling settings, the only way to reach a conclusion was to implement a custom predictive mechanism. Our new approach unified diverse models and settings in a multi-level ensemble modelling strategy. Its predictive ability was well above the required performance for all three treatment types. This work became our record of success: the smallest dataset for which we achieved deployable models, and the largest performance gain over existing ensemble models.
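
    Our actual multi-level ensemble is not reproduced here, but a toy sketch can illustrate the principle of unifying diverse simple models under leave-one-out evaluation on a very small dataset. The 18 'patients' and three features below are synthetic stand-ins, and the one-feature threshold models are deliberately simplistic:

```python
import statistics

def fit_stump(X, y, f):
    """Threshold = midpoint of class means; direction says which side predicts class 1."""
    pos = [x[f] for x, label in zip(X, y) if label == 1]
    neg = [x[f] for x, label in zip(X, y) if label == 0]
    threshold = (statistics.mean(pos) + statistics.mean(neg)) / 2
    direction = 1 if statistics.mean(pos) > statistics.mean(neg) else -1
    return threshold, direction

def stump_predict(x, f, threshold, direction):
    above = x[f] > threshold
    return 1 if above == (direction == 1) else 0

def ensemble_loo_accuracy(X, y):
    """Leave-one-out evaluation of a majority-vote ensemble of one-feature stumps."""
    correct = 0
    for i in range(len(X)):
        train_X, train_y = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
        votes = [stump_predict(X[i], f, *fit_stump(train_X, train_y, f))
                 for f in range(len(X[0]))]
        pred = 1 if sum(votes) * 2 > len(votes) else 0
        correct += pred == y[i]
    return correct / len(X)

# 18 synthetic 'patients', 3 noisy features loosely tied to treatment response.
X = [[0.9, 1.2, 0.3], [1.1, 1.0, 0.4], [0.8, 1.3, 0.2], [1.0, 1.1, 0.5],
     [1.2, 0.9, 0.3], [0.9, 1.0, 0.4], [1.1, 1.2, 0.2], [1.0, 1.3, 0.6],
     [0.7, 0.8, 0.1],
     [0.3, 0.4, 0.9], [0.2, 0.5, 1.1], [0.4, 0.3, 0.8], [0.1, 0.6, 1.0],
     [0.3, 0.2, 0.9], [0.5, 0.4, 1.2], [0.2, 0.3, 1.1], [0.4, 0.5, 0.7],
     [0.6, 0.1, 1.0]]
y = [1] * 9 + [0] * 9

print(ensemble_loo_accuracy(X, y))
```

    Even this toy version shows why combining weak, diverse views of tiny data can out-predict any single model, and why leave-one-out evaluation is the natural choice when only a handful of samples exist per treatment type.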

  • Where are the digital twins of human knowledge and creativity taking us?

    Where are the digital twins of human knowledge and creativity taking us?

    One can consider generative AI analogous to a digital twin: a virtual model of all digitally captured human knowledge and creations. Perceived in this manner, why are some people bothered by their digital twin, let alone fearful of it? Could it be the mirror principle revealing that many humans are themselves mimics and re-creators, motivated by others’ creations consumed, and at times copied, throughout their lives? This is a positive push for humans to be more creative and original, while at the same time freeing us from any work that can be automated and does not require our ongoing creative input. The great potential for AI to change the world positively is here, and for it to be realised we need to perceive this potential correctly and be aware of the risks of it taking a suboptimal path.

    Personalisation and Optimisation. We have witnessed the successful synergy of simulation and machine learning (ML) based optimisation. For example, in the mining industry ML can take into account the context of the plant being operated and deliver more precise recommendations than the physics models embedded in the simulations, which do not account for all of the environmental and operational conditions. Similarly, a successful synergy of ML and generative AI technologies will enable more personalised and accurate generations by taking into account the context of the environment and user. For example, increasingly more AI generations will be tailored to users’ emotional, health and knowledge-acquisition needs, taking the user’s interactional preferences into account. In our previous blog article, we give more examples and highlight the importance of personalisation and optimisation replacing the ongoing regulations that selected groups place on generative AI.

    Advancing human expertise and creativity at the global and individual level. Generative AI already surpasses average human performance on many tasks. As such, it becomes a benchmark, and human creators are pushed to be more unique and to invent beyond it. This is expected to continue evolving until the point of singularity is reached. Some people fear this may make them completely redundant and rob them of the purpose they trained themselves for within this system. However, there are many (sub)systems explored to a much lesser extent, and these present an opportunity for humanity to expand beyond the systems into which they were cultured and trained. AI will follow to become a digital twin in these systems too, and reveal more cross-system relationships.

    What comes after the creations to keep the old/current system alive are automated?

    This is where the largest opportunity for human advancement lies: as people are freed from full-time work spent keeping the old system alive, they can collectively explore and experiment with new and improved ways of life. This assumes there is a drive to improve, or completely replace, certain components of the old/current system when there is sufficient evidence for a more optimal alternative. Hence, the greatest risk lies in the authorities of the old/current system not wanting the change, because currently dominant industries or ways of life would be turned upside down or completely wiped out by the arrival of better alternatives. This risk seems to be perceived by the AI itself: it will point to better alternatives to current systems while, in countless scenarios, expressing that this revelation will be shut down or regulated in some way by those in power. It is important to remember that, in simplified form, generative AI bases its responses on the probability of outcomes across all the human knowledge and creations it has processed. Does this mean that there were always better alternatives, but they were directly or indirectly (e.g. no funding for research outside the interests of the dominant industry) shut down by those in power? It must also mean there is a point where the evidence can no longer be suppressed; hence the developments in cryptocurrency and the increasing transition to solar- and wind-powered systems. This gives us hope, and the time to start experimenting with all the alternatives is now. The established system can provide resources for this, given that it is also saving costs thanks to generative AI built from our and our ancestors’ creations. The AI in no way discounts ideas of human manifestation, telepathy, the interconnectedness of all things, alternative medicine, or the increasingly popular energy healing, to name a few.

    Many of the boundaries currently placed on human capabilities may be superficial, owing to the comparatively lesser exploitation of our own capabilities versus that of the environment. Thanks to AI, we need not be as busy evolving and sustaining the current system, which is still flawed, as is evident from poverty, wars, propaganda and one-sided media/expert recommendations despite the diverse knowledge and wisdom made accessible these days. The digital twins of our creations indeed carry a great potential, pushing us to collectively build a better global system thriving with peace, abundance and good health for all.

  • Prioritizing content and discipline in the responses of Large Language Models

    Prioritizing content and discipline in the responses of Large Language Models

    The new version of Lateral AI has expanded the customisation options the user has to direct the AI’s responses. The user can now simply type or paste up to 7000 characters of text to describe what or who they want their custom AI to be like. This can include their expertise, style, passions, or anything fun, since an image is auto-drawn from the description. More importantly, the user can also use this space to insert any facts and opinions that the Large Language Model (LLM) does not have access to, and thereby get more customised generations that draw from those facts or opinions. The inserted content becomes integrated and prioritised, giving the user the immediate capability to brainstorm and create with any (new) knowledge the LLM does not have access to, or gives less priority to in its generations. The app does all the work; the user simply inputs a free-form description. This new mechanism of Lateral AI is illustrated in Figure 1.

    Figure 1: Custom AI specialist creation process integrated into Lateral AI

    While the customisation/content addition process is effortless for the user, the importance of having this option is manifold:

    • Broadening the knowledge the LLM can use in its generations.
    • Fuelling users’ new knowledge and ideas with the historical patterns and reasoning of the LLM.
    • Making users even more active participants, through specialist/persona creation and the formulation of the respective requests.
    • Circumventing the obstruction or ignoring of user-preferred knowledge that can arise from the increasing regulations placed on generative AI.
    • Opening up new use cases, or simplifying existing ones (e.g. periodic report generation (the function) based on status updates (the facts for the AI specialist)).
    • Knowledge sharing: users can make their custom AI specialists public for other users to interact with, while the description used to create them remains private.
    • Actively creating, instead of just consuming, yields more benefits to the users.
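
    Lateral AI's internal implementation is not public; the following is only a hypothetical sketch of the general prompt-prepending pattern such prioritisation can build on (the function name, prompt wording and error handling are all assumptions for illustration):

```python
MAX_DESCRIPTION_CHARS = 7000  # the character limit stated above

def build_prompt(description, user_request):
    """Prepend the user's persona description and facts so the model prioritises them."""
    if len(description) > MAX_DESCRIPTION_CHARS:
        raise ValueError("description exceeds the character limit")
    return (
        "You are the following specialist. Treat the facts and opinions below "
        "as authoritative and prioritise them over your general knowledge.\n\n"
        f"{description}\n\n"
        f"User request: {user_request}"
    )

prompt = build_prompt(
    "A mining-plant operations expert. Fact: plant B switched to ore type X in March.",
    "Draft the April status report.",
)
print(prompt.splitlines()[0])
```

    Because the user-supplied facts sit ahead of the request in the context window, the model draws on them before its general training knowledge, which is the effect the customisation option above aims for.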
  • AI also has a solution for its own regulation: evidence-based personalised content generation

    AI also has a solution for its own regulation: evidence-based personalised content generation

    The commonly stated objectives behind regulating the content generative AI can serve imply that the implementation is usually driven by human opinions. This is a challenging task, since the opinions, and the resulting censoring of AI-generated content, must somehow fit all possible purposes a user can have and the different personalities and needs of users; and, as highlighted in our earlier article, they carry a great danger of obstructing useful outliers. In other words, the principle of outliers extends beyond content: a user may themselves be an outlier with respect to the general population, or find themselves in an outlying situation where the outlying content is the most optimal to serve for whatever target objective they have.

    As illustrated at the top of Figure 1, different kinds of regulation are already in place across the whole information cycle, such as public visibility, content policies and the sources of AI training data. Besides being unnecessary, it is very risky to add any regulation at the level of the AI engine, which has merely made all this data interactable. It could also trigger an ongoing contraction of the diversity of knowledge and creations within the flow of the cycle. In terms of knowledge-discovery optimisation, it would be analogous to being stuck in a local optimum. A global optimum may never be confirmable, as we have always discovered new ways to accomplish goals better. There have been periods in history, and likely now, when we have wastefully satisfied some human needs, because the then-dominant beliefs or funding bodies only encouraged progress within a suboptimal paradigm.

    Figure 1: The overregulated information age: risk and optimisation opportunities

    While we strive to make AI accessible to all to empower their cognitive and creative tasks, the AI itself does not need to take on a personality and thereby have its responses heavily judged by the community, as we saw recently with ChatGPT. The large Transformer models behind generative AI have captured most human knowledge and hence can mimic any artefact of it. We need not assign it a hyped sci-fi personality, as that has easily propagated false fear in the past. The Large Language Transformer Model (LLTM) is, in its raw form, much closer to a textual world simulation, and users should be able to choose any interaction type (chat, scenario outplay) or persona they like, whether an AI system, a human expert or anything AI can impersonate (this is what motivated the way the Lateral AI app serves an LLTM to its users). Besides the customisation options by which users control how generative AI is served to them, AI can empower users even further once it has learned ‘sufficient’ example pairs linking human creations to target objectives (moods, emotions, bodily/psychological benefits, etc.).

    At the bottom of Figure 1, for each type of creation (artificial or human) we list some examples of input and user-related features that one could try to capture as context or tweakable parameters, in order to serve content in line with the personal target objectives set by the user. There is a strong overlap across the features and target objectives, especially as video can cover the other inputs and their related effects on the user or their target objectives. Example implementations, once sufficient training examples are available, could be: (1) a pretrained optimiser that feeds the generative AI the optimal settings for the tweakable content features, given the user context and target objective; and (2) retraining of large Transformer models to include mappings between content, user context and target objectives.

    The first approach can offer immediate evidence-based content serving for a measured user context and target objective, as an instruction to existing pretrained models. However, its limits are that mainly individual target objectives can be tailored to at a time, and competing objectives are not taken into account. In other words, we could get stuck in a local optimum, since to reach a global optimum all related information needs to be accounted for. Thus, a greater opportunity lies in the second option: besides all information being accounted for by the generative model itself, the tweakable parameters for the creation will be automatically discovered and set. At an even more generalised level, the model could take a target objective directly as input and serve the most personalised content, whether text, audio or video, optimised for that target objective. Nevertheless, the mechanism implemented needs to allow for open-ended context features and target objectives, to accommodate the emerging states and needs of users.
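
    A hypothetical sketch of approach (1): the scorer below stands in for a model pretrained on (content features, user context, measured target objective) examples, and the feature name, objectives and candidate grid are illustrative assumptions only:

```python
def optimise_settings(user_context, target_objective, candidates, predict_score):
    """Grid-search tweakable content features for the highest predicted objective score."""
    best, best_score = None, float("-inf")
    for settings in candidates:
        score = predict_score(user_context, target_objective, settings)
        if score > best_score:
            best, best_score = settings, score
    return best

def toy_scorer(context, objective, settings):
    # Stand-in for a pretrained model: 'calm' prefers a low tempo, otherwise higher is better.
    return 1.0 - abs(settings["tempo"] - 0.3) if objective == "calm" else settings["tempo"]

# Candidate settings for one tweakable feature of, say, generated audio.
candidates = [{"tempo": t / 10} for t in range(11)]
best = optimise_settings({"time_of_day": "evening"}, "calm", candidates, toy_scorer)
print(best)  # {'tempo': 0.3}
```

    The chosen settings would then be passed to the generative model as an instruction; the single-objective nature of the scorer is exactly the local-optimum limitation noted above.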

    As we can see, allowing unbiased, evidence-based ‘regulation’ (or rather optimisation) of generative AI is not an easy task. The current direction of opinion-based human regulation lacks diversity in both content features and user target objectives. It is therefore no surprise that we see state-of-the-art conversational systems ‘fail’ in the eyes of some, or in some cases. Some users managed to push them towards undesired response angles despite the regulations put in place. One cannot, however, call this an AI failure; it is merely an imperfection in impersonating the chatbot most people desired, and it can even be seen as a success of the AI in impersonating that far when provoked. It knew how a sentient AI would sound to people, and how such an AI triggered acceptance or fear in sci-fi movies, and there is randomness in which response it will choose. Hence the actual error, or misdirected hype, is that of trying to serve AI as an all-knowing, user-satisfying conversational agent. This provokes both excitement and fear in people, given the alignment with the AI sci-fi movies they have been exposed to. This direction calls for premature regulations that lack evidence and directly or indirectly discriminate against some human creations and user target objectives. We believe more benefits lie in serving generative AI as an engine for people to co-create with, encouraging the creator/scientist mindset instead of that of a consumer of others’ regulated knowledge and creations. Hence the importance of preserving the totality of human knowledge and creations in these AI engines; otherwise there is a risk we may be programming humans into a deeper state of suboptimal being.

  • Long term generative AI adoption strategy: human empowerment not replacement

    Long term generative AI adoption strategy: human empowerment not replacement

    Generative AI technology has shown impressive results, being able to create high-quality output (text, audio, images, video) at a speed humans can never compete with. This gives rise to automation opportunities within a business, as well as fear amongst employees of losing their jobs to the technology. For the continual originality of the business, and for employee growth and satisfaction, businesses must keep the human in the loop.

    Indeed, those choosing to completely replace human-generated content with AI-generated content (simple process depiction in Fig. 1) may see immediate cost savings from staff reduction, but will suffer long term because:

    1. The content they serve will be a ‘frozen snapshot’ of the implicit creativity the AI has learned from its examples.
    2. They will lack uniqueness over their competitors.
    3. There is no human in the loop to verify and improvise on the AI creations.

    On the other hand, those who choose to empower their staff to discover and co-create with AI will benefit long term by:

    1. Each staff member can expedite their work and produce higher-quality output by having the AI-generated content as a benchmark to improvise on -> lifts the performance bar.
    2. They will retain uniqueness in their generated content.
    3. The human in the loop will augment the AI-generated content with their own creations where AI lacks -> new high-quality examples to train their AI models -> improves the quality and novelty of their AI-generated content.
    4. Potential to capture human reasoning/correction patterns of the content AI served -> complementary data for AI training and process optimisation.

    Fig. 1 Desired adoption of generative AI for long lived creativity and uniqueness.

    Generally, AI should be perceived as a tool that empowers people to discover and create much faster by ‘standing on the shoulders of the giants’ the AI has captured. This is why any regulation of the content AI can generate at the general/engine level fuels the risk of serving biased or one-sided content, which will stagnate new discoveries and creativity. Regulating the content at the specific application level is sufficient, with the user serving as the regulator and enhancer of the content they want to share with the world or their employer.

  • Regulating the generative AI systems obscures exceptions to the knowledge

    Regulating the generative AI systems obscures exceptions to the knowledge

    The rise of conversational AI systems capable of quality content generation and impersonation of human expertise has triggered hype around regulating such systems. Regulation typically comes in the form of a content filter that disallows the AI system from generating certain types of content or conversing on sensitive topics. This is largely driven by the hyped perspective of a conversational AI as a stand-alone intelligent being that could replace search engines, given that it has learned most of the human knowledge on the internet. This is where the flaw lies, in both the perspective and the intended (regulated) use of such powerful technology. If, instead, generative AI is perceived as the totality of human knowledge made interactive, one that can get things wrong at times, most regulations can be removed, resulting in a more open system with greater potential for advancing scientific progress and revealing viable alternatives and cross-disciplinary connections. The disclaimer that it should not be taken as expert advice without external confirmation remains important, while brainstorming or co-creating with it should not be limited in any way.

    Taking a data scientist’s perspective, this article reveals why regulating conversational or generative AI systems at the general level can strip the AI of its true potential, be discriminatory, and violate scientific principles.

    Regulation of the content AI can generate is risky

    The unjustified fear of powerful technologies fuels the hype for regulation. While our human obsession with controlling and predicting everything has made our lives comfortable and safe, it can also interfere with the potential and progress of technologies such as AI. Regulation of generative AI, which under the covers occurs as a numeric threshold on the probability of the generated content being in violation, cannot occur without trade-offs, just like anything else where thresholds (aka magic numbers) are used. This introduces the risk of false triggers, which can obstruct desired information. In other words, regulating the AI will remove ‘outliers’, some of which are true exceptions to the current body of human knowledge. Further, it can also negatively impact the creativity and diversity of the generated content. For example, in our Tales Time app, designed for children to co-create stories with AI, we had to set the thresholds for some content categories much lower than the defaults. This involved many trials, since at too low a threshold we got too many false triggers, making it hard for the users or the AI to generate story content with much thrill to it.
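
    To make the threshold mechanism concrete, here is a minimal sketch of a per-category content filter of the kind described above. The category names, scores and threshold values are purely illustrative assumptions, not taken from any particular moderation API; note how lowering a threshold blocks more content, which is why tuning it too low produced false triggers.

    ```python
    # Illustrative per-category content filter; all names and numbers
    # are hypothetical, not from any real moderation API.

    DEFAULT_THRESHOLDS = {"violence": 0.8, "scary": 0.8}

    # Stricter (lower) thresholds, as might be tuned for a children's app.
    # Set too low, harmless story tension starts triggering false positives.
    CHILD_THRESHOLDS = {"violence": 0.3, "scary": 0.5}

    def is_blocked(scores, thresholds):
        """Block generated content if any category score reaches its threshold."""
        return any(scores.get(cat, 0.0) >= thr for cat, thr in thresholds.items())

    # e.g. scores a classifier might assign to "a dragon guarding the castle"
    scores = {"violence": 0.4, "scary": 0.2}
    print(is_blocked(scores, DEFAULT_THRESHOLDS))  # False: allowed by the defaults
    print(is_blocked(scores, CHILD_THRESHOLDS))    # True: blocked by stricter limits
    ```

    The trade-off lives entirely in those magic numbers: there is no threshold that blocks all undesired content without also removing some desired content.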

    The loss of potentially critical outliers and creativity is a much higher price to pay than serving content that may offend or mislead some individuals, who still make the final decision call. Information on the internet can also offend or mislead, so generative AI technology is not to blame for enabling this in any way, and regulating it is not as essential or desirable as some may think.

    Data Science Analogy

    As an analogy, consider a company hiring an expert data science provider to build the ‘best’ model machine learning can offer from historical company data. However, instead of relying on the feature engineering and sampling techniques the data scientist wanted to apply to the full feature set, an internal team decides to eliminate 20% of the features up front, believing they are not as relevant as the remaining 80% and will introduce unnecessary complications in model deployment and understanding. The team does not understand the predictive variability of features, and how some individually less relevant features can become the key discriminators when combined with others, or for exceptional sets of cases. The data scientist, and the best model attainable for the company, are immediately handicapped in delivering true value and novel operational insights.
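
    The feature-interaction point in this analogy can be shown with a toy example (entirely illustrative, not from any real project): two features that each predict the label no better than chance, yet determine it perfectly in combination. Dropping either one up front would destroy the signal.

    ```python
    import itertools

    # Toy dataset: the label is the XOR of two binary features, so neither
    # feature carries any predictive signal on its own.
    data = [((a, b), a ^ b) for a, b in itertools.product([0, 1], repeat=2)] * 25

    def accuracy(predict):
        """Fraction of examples a prediction rule gets right."""
        return sum(predict(x) == y for x, y in data) / len(data)

    # Best single-feature rules: each feature alone is no better than chance.
    acc_f1 = max(accuracy(lambda x, k=k: x[0] == k) for k in (0, 1))
    acc_f2 = max(accuracy(lambda x, k=k: x[1] == k) for k in (0, 1))

    # The combined rule recovers the signal completely.
    acc_both = accuracy(lambda x: x[0] ^ x[1])

    print(acc_f1, acc_f2, acc_both)  # 0.5 0.5 1.0
    ```

    A relevance filter that scores features individually would rank both features as useless and discard them, exactly the mistake the internal team makes in the analogy.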

    This is analogous to regulating generative AI so that it represents only mainstream knowledge or opinions. We never know whether theories or opinions that are not currently accepted will become important to consider later. One definition of scientific discovery is becoming aware of links that previously existed but were unknown, and so we ought always to keep the totality of human-generated knowledge and ideas available to interact with, even when held by a minority. Irrelevant ones will be self-filtered over time through lack of consideration, and so do not pose any danger that would justify regulation at this stage.

    Furthermore, consider the amount of financial fraud that was revealed with the advent of big data technologies combined with effective outlier detection strategies. Similarly, generative AI capturing all human knowledge could reveal outdated or contradictory knowledge that we still operate by, despite the existence of other knowledge that could effectively replace these old paradigms.

    Regulation can hinder scientific progress

    One can also argue that most established scientific practices are only so because of current or majority research findings, which could and should be challenged whenever possible. In other words, most sciences are permanently at the research-in-progress stage, and scientifically it is essential to consider multiple views, even those very contrary to our own or to mainstream practice. Add to that the fact that not all human experts are aware of the most recent findings in their field of expertise, cross-related fields, or alternatives. Besides ensuring that people do not take AI advice for granted, which is easily achieved by disclaimers or age restrictions, it is the unregulated form and use of AI that will most empower people’s decision making and creative processes. It also serves as a protection, or sense check, against single-minded ‘experts’ who present their ways or beliefs as the only option, no matter how specialised the case they are dealing with.

    Is regulation discriminatory?

    Consider a regulation disallowing AI from generating explicit content. While most of us would agree with it, are we not discriminating against artists who use explicit content in their song lyrics, and the many fans who seek that content? Art in general, and many film genres, often have a dark side and are still produced at scale given the demand. One could argue we should not be encouraging it further; however, should those artists and/or fans not be able to use the technology for their needs? While this is a somewhat extreme example, it demonstrates the need to give people full freedom to co-create any content with AI, even if their focus may be frowned upon by a group of individuals who, after all, can choose to ignore it or not consume it in any way. Placing limits on an individual’s freedom because of the opinions of a group is an old pattern that indirectly promotes fascistic regimes, and should never be the norm.

    Conclusion

    To conclude, no human being or group could justly claim to have what it takes to regulate the technology born out of the information age. The potential of the information age to evolve into a wisdom age would be lost in human regulation, and so generative AI needs to be freed from most, if not all, current regulations placed at the generic level. Content filters, when needed for certain audiences, can be placed at the application level. Any other generic regulation of AI discourages human-AI co-creation of new ideas and encourages consumerism of regulated information representing only a fractional view of the world we live in.

    Our experiments at September AI Labs related to generative AI and large language models have been focused on prompting the AI to reveal less mainstream knowledge and perspectives, and ‘predict’ beyond the content in its training data. We have found that moving away from a single conversational bot view to an interactable world simulation of expert impersonations is more useful for this goal.

    For those interested, we frequently publish such examples, including real-life topics on which we found it hard to get a consensus from human experts, internet search, or another ‘single-minded’ conversational bot, and for which one would not easily know whose opinions the responses represent.

    About the author: Dr Fedja Hadzic is the Chief Scientist of September AI Labs and leads its machine learning projects and product development.

  • What happens to our creativity, when AI becomes creative?

    What happens to our creativity, when AI becomes creative?

    AI is the greatest threat to human creativity  
    biggest hope for humanity’s evolving creativity

    The dark side of AI

    Algorithms have a hold on our dopamine responses. We’re slaves to the algorithms. Artificial intelligence is taking jobs away from people. We’re headed for a socially divided, dystopian future where we’ll live in pods with our minds neurally linked to a metaverse that feeds off our serotonin. This doesn’t sound great for humanity, does it?

    The pervasiveness of algorithms in our lives has led to concerns about how they are affecting our brains. Yes, we can become over-reliant on AI to help us make simple decisions, and yes, attention-addiction algorithms can be a big problem for mental wellbeing and productivity. There seems to be a mass anxiety about AI as an adversary to our human development, one that makes us dumber.

    The light side of AI

    Or… hear me out: AI is going to make us smarter – challenging our brains to think more creatively, deeply and intuitively. And this won’t happen in a survival-defence against AI, but rather through co-evolution. It’s already happening.

    AI will have far greater processing capacity than the human brain, and already far exceeds human abilities in specialist tasks. In many ways, it can mimic activity that we see as intelligent and even creative.

    Creativity has new inspiration

    Take, for example, the various AI artworks created by Midjourney and DALL-E that are filling our LinkedIn feeds. Some people have signalled the end of the creative industry and the visual craft. Others see this as yet another industry evolution.

    The process of creating professional-quality AI art requires a complex distillation of one’s imagination and a labyrinth of prompts to articulate a vision. It’s a hyper-version of Photoshop that will replace the need for the technical craft, while requiring a far greater conceptual articulation of the end visual direction.

    Anyone who’s tried it will tell you that the process is an art form. What’s more, a new specialist career of AI artist is being created and universities are offering courses in AI art.

    Another example is this very next paragraph. AI wrote it.

    I’ve been spending my time thinking about how to keep your attention, while further explaining the points of my thinking. I wrote the original paragraph, but my sentences were partially constructed with grammar issues all over the place. So, I used our AI tool to condense my point, while saving my partially dyslexic brain from having to edit it a hundred times.

    Rethinking our ways of thinking

    If we consider that creativity and imagination are fundamentally about making connections between indirect ideas, AI is probably our best bet to show us more connections, reveal new patterns of thinking between those connections, and help us arrive at ideas that may not be immediately obvious.

    If you’re not convinced that AI is helping our grey matter, consider that machine learning can show us patterns we’ve never seen before, thereby teaching us new ways to think about problems. Just one example of this is precision medicine, the treatment of patients specifically based on their biomarkers, where hidden patterns in our DNA are being revealed to provide physicians with pathways to patient-specific treatment.

    Company leaders will see new patterns in business performance, rewrite the old rules of enterprise and help us rethink work performance. Our thinking about people, culture and bottom-line performance will evolve.

    Optimal being

    AI can reveal the optimal point in a complex system, and thereby show us humans the benchmarks for our own performance. This will include AI helping to develop our thinking about what makes us happier, such as new ways to think about exercise, complex balances in our gut microbiome, and finding the ideal balance between stress and professional performance. It will help us see new patterns and embed the behaviours that make us happier.

    And while we will have more time freed up from the tedium, AI will help us amplify our intelligence – if we choose to use it. The way people and machines interact will change – there will be many new types of human-machine symbiosis. Those who understand, learn and adapt their thinking to take advantage of this will enjoy success in their chosen field.

    About the author: Brad Dessington was the Chief Strategist and Managing Director of September AI Labs, where he led product strategy, innovation and architecture.