Lobian's Commitment to Transparency and Responsibility
On Generative AI & Ethics
We Do Not Use Generative Images or Video in Our Products
Lobian never deploys generative images or video as final products. Our 3D environments, visualizations, and interactive experiences are built by real human artists.
We never use this professional work as data for training our own models.
In our personal workflows, we may experiment with generative tools to explore ideas quickly, but only with careful attention to resource use and ethics. We similarly use generative text and code as assistive technologies, but never as replacements for our current or potential colleagues. We cut workloads, never staff.
Our leadership reflects this commitment. We're a team of three creatives: an art technician and painter; a game developer and designer; and a 3D model artist & technician. We bring lived professional experience in creative industries to every decision we make.
On Data and User Consent
We believe ethical AI systems are built on trust, and trust is built on clarity. Lobian collects user interaction data only when we have explicitly notified users, in clear and straightforward language, about what we're collecting and why. If a user isn't informed, we simply don't collect.
Consent is always opt-in. A simple, unobtrusive checkbox with information readable in under ten seconds, not pages of legal text. We do recognize certain implicit consent scenarios: purchasing something through our tools means our partner support teams can access that transaction if something goes wrong; asking about a painting in a gallery means that question enters a closed system designed only to answer you, not to identify or market to you. Beyond these obvious necessities, we never assume we have permission.
You Are Actually in Control Here
Anyone can request deletion of their data from our systems at any time, regardless of jurisdiction. We'll honor the request even if local laws don't require it. Send us a regular email; we're humans on the other end, not a compliance robot.
On Data Sharing and Partnerships
Data we provide to clients through Lobian Link is anonymized when it relates to purchase patterns and usage trends. We never share user account-tied data we've collected for research purposes. That stays private, and we delete as much of it as we can. Our partners are carefully vetted; we don't work with just anyone, and we're transparent about who they are. When we collaborate, we do so because the work genuinely improves how our partners operate internally, not just how they look externally. Anonymized data may be shared with business partners as part of products we develop together, but users will never wonder which data falls under this category, because we make sure it's impossible to miss.
On Safety, Bias, and Function
Our models operate on defined functions and can't deviate much from them.
But we're honest: LLMs carry bias, and we as humans carry biases too. We do our best with current standards, but we don't pretend perfection is possible. We keep our bias-testing methodologies internal to prevent bad actors from exploiting known vulnerabilities, but we share our findings with our partners so they can build more accurately.
Our Commitment to Creatives
We're a team of creatives, connected with local communities and art scenes in our fields. We see how AI is reshaping creative industries, and we're committed to building tools that expand what creators can do rather than replace them.
We support human artists, designers, and professionals. We're always here to listen to and collaborate with non-technical creatives. They're the ones who help us navigate how Lobian will continue to work for creative communities, not against them.