<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Transformation &amp; AI Archives - JIN, Agency in Europe (France, UK, Germany...)</title>
	<atom:link href="https://jin.eu/themes/transformation-ia/feed/" rel="self" type="application/rss+xml" />
	<link>https://jin.eu/themes/transformation-ia/</link>
	<description>A site using JIN</description>
	<lastBuildDate>Tue, 20 Jan 2026 12:19:39 +0000</lastBuildDate>
	<language>fr-FR</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>Atchik joins Opinion Act within the JIN Group to form Europe’s largest team in digital opinion analysis</title>
		<link>https://jin.eu/atchik-joins-opinion-act-within-the-jin-group-to-form-europes-largest-team-in-digital-opinion-analysis/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 20 Jan 2026 12:19:01 +0000</pubDate>
				<category><![CDATA[Agency News]]></category>
		<guid isPermaLink="false">https://jin.eu/?p=2934</guid>

					<description><![CDATA[<p>This partnership will give rise to the largest digital opinion analysis team in Europe, supporting 90 clients with revenues of 6 million euros, under the leadership of Romain Ponceau. It combines Atchik’s expertise in monitoring and moderation with the power of the AI-augmented team at Opinion Act by JIN. The largest consulting team in Europe...  <a class="excerpt-read-more" href="https://jin.eu/atchik-joins-opinion-act-within-the-jin-group-to-form-europes-largest-team-in-digital-opinion-analysis/" title="Read Atchik joins Opinion Act within the JIN Group to form Europe’s largest team in digital opinion analysis">Read more &#187;</a></p>
<p>The article <a href="https://jin.eu/atchik-joins-opinion-act-within-the-jin-group-to-form-europes-largest-team-in-digital-opinion-analysis/">Atchik joins Opinion Act within the JIN Group to form Europe’s largest team in digital opinion analysis</a> first appeared on <a href="https://jin.eu">JIN, Agency in Europe (France, UK, Germany...)</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>This partnership will give rise to the largest digital opinion analysis team in Europe, supporting 90 clients with revenues of 6 million euros, under the leadership of Romain Ponceau. It combines Atchik’s expertise in monitoring and moderation with the power of the AI-augmented team at Opinion Act by JIN.</p>



<p><strong>The largest reputation consulting team in Europe</strong><br><br>With more than 45 specialists dedicated to monitoring, moderation, and analysis of online conversations, Opinion Act and Atchik now form the largest European team in their field. This new organization offers unique data coverage and a presence at the heart of conversations 7 days a week, 24 hours a day, to secure clients’ reputations, inform executive decision-making, and support brands in building lasting relationships with their communities.</p>



<p><em>“This partnership is a major turning point. We are combining Atchik’s excellence with Opinion Act’s AI-driven innovation to offer our clients a unique solution for reputation protection,”</em> says Edouard Fillias, CEO of the JIN Group.</p>



<p><strong>A shared vision: building trust in the age of AI</strong></p>



<p>Artificial intelligence is already central to Opinion Act by JIN’s methods, from measuring communication impact to anticipating crises. This momentum is strengthened by the arrival of Atchik, which provides new conversational data from organizations’ social networks, as well as proprietary AI-assisted response technology.</p>



<p><em>“AI allows us to move beyond traditional approaches and deliver real anticipatory capabilities, complemented by multilingual expertise. This further strengthens our ability to secure our clients’ reputations against the growing spread of fake content,”</em> emphasizes Romain Ponceau, Managing Director of Opinion Act by JIN.</p>



<p><em>“It is the historic know-how of Atchik and Opinion Act, the expertise of our professions, combined with the power of AI, that enables us to deliver the best possible service to our clients,”</em> says Brice Le Louvetel, Deputy Managing Director of Atchik.</p>



<p><strong>A unified offering, available 7 days a week, 24 hours a day and enhanced by AI</strong></p>



<p>Atchik’s organization, dedicated since its inception to managing large volumes of conversations 7 days a week, 24 hours a day, strengthens Opinion Act’s reputation protection capabilities. This partnership responds to a growing need among companies to have continuous monitoring and analysis capabilities in a dense and constantly active digital environment. Opinion Act x Atchik will now offer:</p>



<ul class="wp-block-list">
<li>An impact methodology to support executive decision-making</li>



<li>Real-time and anticipatory monitoring, enhanced by AI, to secure reputation</li>



<li>Dedicated studies strengthened by AI</li>



<li>Moderation 7 days a week, 24 hours a day</li>



<li>A community management offering to turn comments into conversations with customers</li>
</ul>



<p><strong>A structuring transaction at the heart of JIN’s growth strategy</strong></p>



<p>The success of this transaction marks the completion of a first phase of external growth for the JIN Group and opens a new stage of development focused on international expansion, strengthening data and AI expertise, and the entry of new strategic partners, with the ambition of reaching 40 million euros in revenue within three years.</p>



<p>In this context, Effective Capital now supports JIN as a strategic advisor in this new phase of acquisitions and international expansion, alongside the management teams.</p>



<p>The article <a href="https://jin.eu/atchik-joins-opinion-act-within-the-jin-group-to-form-europes-largest-team-in-digital-opinion-analysis/">Atchik joins Opinion Act within the JIN Group to form Europe’s largest team in digital opinion analysis</a> first appeared on <a href="https://jin.eu">JIN, Agency in Europe (France, UK, Germany...)</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Transparency and opacity of AI systems: what impact on our rights?</title>
		<link>https://jin.eu/transparency-and-opacity-of-ai-systems-what-impact-on-ourrights-2/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 30 Jan 2025 10:25:20 +0000</pubDate>
				<category><![CDATA[Blog posts]]></category>
		<guid isPermaLink="false">https://jin.eu/?p=2727</guid>

					<description><![CDATA[<p>However, this idea fails to take into account the complex workings of artificial intelligence, which relies on statistical links far removed from human reasoning. Learning AIs are often unable to explain how they arrive at their results. This lack of transparency, or “explainability”, turns them into veritable “black boxes”. AI to revolutionize healthcare The future...  <a class="excerpt-read-more" href="https://jin.eu/transparency-and-opacity-of-ai-systems-what-impact-on-ourrights-2/" title="Read Transparency and opacity of AI systems: what impact on our rights?">Read more &#187;</a></p>
<p>The article <a href="https://jin.eu/transparency-and-opacity-of-ai-systems-what-impact-on-ourrights-2/">Transparency and opacity of AI systems: what impact on our rights?</a> first appeared on <a href="https://jin.eu">JIN, Agency in Europe (France, UK, Germany...)</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>However, this idea fails to take into account the complex workings of artificial intelligence, which relies on statistical links far removed from human reasoning. Learning AIs are often unable to explain how they arrive at their results. This lack of transparency, or “explainability”, turns them into veritable “black boxes”.</p>






<p><strong>AI to revolutionize healthcare</strong></p>



<p>In healthcare, AI is set to become an increasingly complex tool, harnessing ever more data. It will no longer be just a matter of making a diagnosis for a single pathology, but of carrying out truly comprehensive assessments, combining imaging, medical biology, and no doubt connected health devices (such as ECG via the Apple Watch). Jean-Emmanuel Bibault, oncologist at Georges Pompidou Hospital, predicts that, very soon, medicine will be unable to understand the diagnoses provided by AI.</p>



<p>Some AIs are already capable of detecting breast or pancreatic cancers several years before they appear. Imagine the reaction of a patient who is told that an AI estimates that he or she has an 85% chance of developing a fatal cancer within two years, without medicine being able to explain how the AI arrived at this diagnosis. Just as mechanics plug in a computer to diagnose faults, doctors risk losing their central role in diagnosis. This is inevitable, as AI is already better at it. As Jean-Emmanuel Bibault points out, AI makes diagnoses from clinical pictures with an 87% success rate, while doctors achieve only 65%.</p>



<p>One specialized AI, DrOracle, even scored 97/100 on the US medical school exit exam (compared with 75 for ChatGPT-4). An impressive score, all the more so as it only takes around 70% to pass this exam.</p>



<p>Efforts are underway to improve AI transparency. Researchers are working on explainability techniques, aimed at making AI decisions more comprehensible to humans. These approaches often combine deep learning with expert systems, the latter operating on rules of causality defined by science. However, these solutions often constrain the potential of AI.</p>



<p>In the medical field, AI will be rigorously controlled by research and by doctors themselves. But what about banks refusing a loan, recruiters dismissing a candidate, or schools rejecting an enrolment? Will they go to such lengths to control their AIs? In a relentless race for productivity, nothing is less certain.</p>



<p><strong>Our rights in the face of AI</strong></p>



<p>Following the application of the GDPR (General Data Protection Regulation), the European regulation protecting the personal data of EU citizens that came into force in May 2018, the president of Italy&rsquo;s main employers&rsquo; union quipped: “America innovates, China copies, Europe regulates”.</p>



<p>Under the impetus of its European Commissioner Thierry Breton (a former French Minister of the Economy under Jacques Chirac), the EU has further illustrated this adage by being the first to regulate AI, demonstrating real responsiveness and a clear grasp of what is at stake.</p>



<p><strong>The AI Act to manage risks</strong></p>



<p>The AI Act, which came into force in August 2024, defines four levels of risk (minimal, limited, high and unacceptable), along with a separate regime for general-purpose AI. Minimal-risk AI includes technologies such as spam filters, voice assistants like Alexa and Siri, product recommendations, and machine translation. These tools are considered low-risk and are not subject to any particular regulatory requirements.</p>



<p>However, AIs classified as limited, such as chatbots, content filters on social networks and content recommendations (Netflix, press…), must now be transparent about how they operate and how the data they process is used.</p>



<p>General-purpose AIs, which include advanced virtual assistants such as ChatGPT and predictive analytics platforms, are subject to more stringent requirements. These systems must implement rigorous risk management throughout their lifecycle, guarantee the quality and representativeness of the data used, and provide detailed technical documentation. Transparency is fundamental: users need to know that they are interacting with AI. In addition, human oversight must be built in to enable appropriate supervision, while levels of accuracy, robustness and cybersecurity must be maintained at a high level to avoid errors and hacking.</p>



<p>High-risk AIs, used in sensitive sectors such as healthcare, education, recruitment, critical infrastructure management (power…), law enforcement and justice, are subject to similar but even stricter obligations. These systems must comply with rigorous standards to guarantee their security and fairness. Facial recognition for surveillance also falls into this category, underlining the need to regulate potentially intrusive technologies.</p>

<p>Finally, the unacceptable risk level prohibits AIs involved in subliminal manipulation (advertising, social networks, games…), social scoring, and real-time biometric surveillance (facial recognition, but possibly also tattoo recognition), with a few exceptions, such as investigations into kidnappings or terrorist threats. These restrictions are designed to prevent unregulated mass surveillance and protect individual freedoms.</p>
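As an illustration of the tiering just described, the classification can be sketched as a simple lookup. The tier names and example systems below follow this article's description, not the regulation's legal text, and the function is purely hypothetical (a real compliance assessment depends on the context of use, not just the product category):

```python
# Illustrative sketch of the AI Act risk tiers described above.
# Tier names and example systems follow the article, not the legal text.
RISK_TIERS = {
    "minimal": ["spam filter", "machine translation", "product recommendation"],
    "limited": ["chatbot", "content recommendation"],
    "general-purpose": ["advanced virtual assistant", "predictive analytics platform"],
    "high": ["recruitment screening", "credit scoring", "facial recognition surveillance"],
    "unacceptable": ["social scoring", "subliminal manipulation"],
}

def risk_tier(system: str) -> str:
    """Return the illustrative risk tier for a named system, or 'unknown'."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    return "unknown"
```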



<p><strong>Regulatory challenges in Europe</strong></p>



<p>This legislation is part of a growing global trend to regulate emerging technologies. In the USA, debates on AI regulation are gaining momentum, but the country is taking a more innovation-led approach.</p>



<p>The AI Act could well become a model for other parts of the world seeking to regulate AI in a balanced way. It&rsquo;s an important first step, but it can&rsquo;t answer all the questions posed by the rise of artificial intelligence.</p>



<p>The speed of technological advances calls for adaptable and evolving regulation. As it stands, the AI Act creates legal uncertainty, which could slow down the development of AI on the continent: Apple has delayed the European launch of Apple Intelligence, and Meta has postponed the release of the latest version of its open-source Llama model.</p>



<p>Optimists see this as an opportunity for European companies like Mistral AI. Nevertheless, the question remains: will they be able to keep pace with innovation while complying with strict rules that their foreign competitors are not obliged to follow?</p>



<p>The answer to this question may well determine the future of AI in Europe.</p>
<p>The article <a href="https://jin.eu/transparency-and-opacity-of-ai-systems-what-impact-on-ourrights-2/">Transparency and opacity of AI systems: what impact on our rights?</a> first appeared on <a href="https://jin.eu">JIN, Agency in Europe (France, UK, Germany...)</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>JIN assists Perplexity with its press relations and consolidates its position as an expert AI agency</title>
		<link>https://jin.eu/jin-assists-perplexity-with-its-press-relations/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 23 Jan 2025 14:59:05 +0000</pubDate>
				<category><![CDATA[Agency News]]></category>
		<guid isPermaLink="false">https://jin.eu/?p=2723</guid>

					<description><![CDATA[<p>The article <a href="https://jin.eu/jin-assists-perplexity-with-its-press-relations/">JIN assists Perplexity with its press relations and consolidates its position as an expert AI agency</a> first appeared on <a href="https://jin.eu">JIN, Agency in Europe (France, UK, Germany...)</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The article <a href="https://jin.eu/jin-assists-perplexity-with-its-press-relations/">JIN assists Perplexity with its press relations and consolidates its position as an expert AI agency</a> first appeared on <a href="https://jin.eu">JIN, Agency in Europe (France, UK, Germany...)</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Moderating with AI: blocking hate without undermining our freedoms?</title>
		<link>https://jin.eu/moderating-with-ai-blocking-hate-without-undermining-our-freedoms/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 16 Jan 2025 11:03:01 +0000</pubDate>
				<category><![CDATA[Blog posts]]></category>
		<guid isPermaLink="false">https://jin.eu/?p=2731</guid>

					<description><![CDATA[<p>AI, an opportunity to combat online hate Yet AI can be seen as an opportunity in the fight against online hate, where it is capable of unparalleled effectiveness. Today&#8217;s moderation systems are based on user reports and manual checks by teams that are often undersized. It is not unusual for a hate post to circulate...  <a class="excerpt-read-more" href="https://jin.eu/moderating-with-ai-blocking-hate-without-undermining-our-freedoms/" title="Read Moderating with AI: blocking hate without undermining our freedoms?">Read more &#187;</a></p>
<p>The article <a href="https://jin.eu/moderating-with-ai-blocking-hate-without-undermining-our-freedoms/">Moderating with AI: blocking hate without undermining our freedoms?</a> first appeared on <a href="https://jin.eu">JIN, Agency in Europe (France, UK, Germany...)</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><strong>AI, an opportunity to combat online hate</strong></p>



<p>Yet AI can be seen as an opportunity in the fight against online hate, where it is capable of unparalleled effectiveness. Today&rsquo;s moderation systems are based on user reports and manual checks by teams that are often undersized. It is not unusual for a hate post to circulate for hours, or even days, before being deleted, giving thousands of people time to see it, share it and imitate it.</p>



<p>Artificial intelligence algorithms, by combining computing power and speed of analysis, offer unprecedented responsiveness compared with traditional human moderation methods. Major social media platforms and other online services already use sophisticated algorithms to identify and moderate hateful content, often in real time.</p>



<p><strong>Riot Games and Google: early and striking examples</strong></p>



<p>Riot Games, the company behind the popular video game League of Legends, is an early and prominent example of the use of AI to moderate online behaviour. Riot Games had developed a system called the Tribunal, where players could review reported cases of inappropriate behaviour, such as threats, racism, sexism or homophobia.</p>



<p>Players&rsquo; votes &#8211; over 100 million in total &#8211; were used to train an AI capable of detecting toxic behaviour. The results have been impressive, with a 40% reduction in verbal abuse since the programme was launched.</p>



<p>Another striking example of the potential effectiveness of AI is Google, which uses deep learning algorithms to moderate comments on YouTube. In 2020, the platform announced that its AI systems were able to detect 95% of content violating its rules before it was even reported by users. What&rsquo;s more, the AI systems were able to remove more than 50% of hateful comments within 24 hours of publication. Even so, Google had to adjudicate, with the help of human staff, twice as many appeals against removed content.</p>



<p>However, algorithms can sometimes misinterpret context, which can lead to false positives (innocent content marked as hateful) and false negatives (undetected hateful content). The subtlety of language and the use of roundabout terms complicate the task of AI systems. Furthermore, the effectiveness of algorithms varies between languages and cultures, making the uniform detection of hate speech even more complex.</p>
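To make this trade-off concrete: in evaluation terms, false positives lower precision (the share of flagged content that was truly hateful) and false negatives lower recall (the share of truly hateful content that was caught). A minimal sketch, with counts invented purely for illustration:

```python
# How false positives and false negatives translate into precision and
# recall when evaluating a moderation model. The counts are invented.
def precision_recall(true_pos: int, false_pos: int, false_neg: int) -> tuple[float, float]:
    """Precision: flagged content that was truly hateful.
    Recall: truly hateful content that was flagged."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# e.g. 90 correct flags, 10 innocent posts flagged, 30 hateful posts missed
p, r = precision_recall(90, 10, 30)  # p = 0.9, r = 0.75
```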



<p><strong>The limits and ethical issues of automated moderation</strong></p>



<p>A delicate balance must be struck between censoring hateful content and protecting freedom of expression. Incidents such as the censorship of Gustave Courbet&rsquo;s painting ‘The Origin of the World’ by Facebook, which was mistakenly deemed pornographic, illustrate the risks of automated moderation.</p>



<p>To maximise the effectiveness of AI models, it is necessary to use larger and more diverse data sets and to develop advanced contextualisation techniques. The close integration of human moderators into the process of reviewing content flagged by AI is also essential, creating hybrid systems that combine the strengths of AI and human expertise. Facebook announced in 2021 the use of AI systems to filter problematic content, while maintaining teams of human moderators for the most complex decisions.</p>
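The hybrid system described above can be sketched as a simple routing rule: confident AI detections are acted on automatically, while uncertain ones are escalated to a human moderator. The keyword scorer and thresholds below are invented placeholders, not any platform's actual method; a production system would use a trained classifier:

```python
# Minimal sketch of a hybrid moderation pipeline: the AI score decides
# whether content is auto-removed, sent to a human, or published.
TOXIC_KEYWORDS = {"hate", "threat", "slur"}

def toxicity_score(text: str) -> float:
    """Placeholder scorer: fraction of words matching a toxic keyword.
    A real system would use a trained model instead."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in TOXIC_KEYWORDS for w in words) / len(words)

def route(text: str, remove_above: float = 0.5, review_above: float = 0.1) -> str:
    """Route content: confident detections are removed automatically,
    uncertain ones go to a human moderator, the rest are published."""
    score = toxicity_score(text)
    if score >= remove_above:
        return "auto-remove"
    if score >= review_above:
        return "human-review"
    return "publish"
```

The design point is the middle band: instead of forcing every decision to be fully automatic, ambiguous cases are deferred to humans, combining the speed of AI with human judgement on context.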



<p><strong>Transparency and regulation, essential conditions</strong></p>



<p>Regular audits of moderation algorithms are necessary to identify and correct potential biases. It is also important to provide greater transparency on how algorithms make censorship decisions and to allow users to challenge these decisions. In the US, civil rights groups have called for greater transparency and accountability in the use of AI to moderate content, highlighting the risks of discrimination or unfairness.</p>



<p>In France, the fight against online hate has taken on a legal dimension with the Avia law, which obliges platforms to remove hateful content within 24 hours of it being reported. Although ambitious, this legislation has raised questions about the ability of platforms to respond effectively and the risks to freedom of expression. AI could offer a solution by enabling faster and more accurate detection of problematic content, but it must be used with discernment and framed by clear regulations. AI, if used responsibly and ethically, could well be the key to cleaning up digital environments.</p>



<p>It offers an unprecedented capacity for rapid reaction and precision of analysis, far surpassing traditional methods of human moderation. Ultimately, AI can play a central role in creating a calmer Internet. The examples of Riot Games, Google and the legal initiatives in France show that AI can provide effective solutions, but they must be applied sensibly to protect both the safety of users and their fundamental rights.</p>
<p>The article <a href="https://jin.eu/moderating-with-ai-blocking-hate-without-undermining-our-freedoms/">Moderating with AI: blocking hate without undermining our freedoms?</a> first appeared on <a href="https://jin.eu">JIN, Agency in Europe (France, UK, Germany...)</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>LJ Com joins JIN to become the leading independent healthcare communications provider</title>
		<link>https://jin.eu/lj-com-joins-jin-to-become-the-leading-independent-healthcare-communications-provider/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 16 Oct 2024 16:22:00 +0000</pubDate>
				<category><![CDATA[Agency News]]></category>
		<guid isPermaLink="false">https://jin.eu/?p=2738</guid>

					<description><![CDATA[<p>‘Winner of 50 awards since its creation in 2000, LJ Com benefits from long experience and recognition in the field of health communication. It&#8217;s a strong team, highly recognised by professionals in the sector, for whom health is above all a mission to serve others,’ explains Edouard Fillias, CEO of JIN. &#8216;JIN is a...  <a class="excerpt-read-more" href="https://jin.eu/lj-com-joins-jin-to-become-the-leading-independent-healthcare-communications-provider/" title="Read LJ Com joins JIN to become the leading independent healthcare communications provider">Read more &#187;</a></p>
<p>The article <a href="https://jin.eu/lj-com-joins-jin-to-become-the-leading-independent-healthcare-communications-provider/">LJ Com joins JIN to become the leading independent healthcare communications provider</a> first appeared on <a href="https://jin.eu">JIN, Agency in Europe (France, UK, Germany...)</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>‘<em>Winner of 50 awards since its creation in 2000, <a href="http://ljcom.net">LJ Com</a> benefits from long experience and recognition in the field of health communication. It&rsquo;s a strong team, highly recognised by professionals in the sector, for whom health is above all a mission to serve others,</em>’ explains Edouard Fillias, CEO of JIN.</p>



<p><em>&lsquo;JIN is a leader in public relations, whose digital expertise and mastery of data will strengthen our capacity for advice and action. In the future, we will be better able to support our clients as the world of healthcare undergoes major changes,</em>’ says Laurence Jacquillat, Chairman of LJ Com.</p>



<p>Faced with these new challenges in healthcare, JIN and LJ Com will be in the front line to help all the players in the ecosystem to speak out in a way that is accurate, appropriate and amplified.</p>



<p>The group will bring together a unique range of communication skills: analysis and research to understand communities, strategy and consultancy to guide, media relations, social networks, public affairs and events to bring people together and inform, and impact measurement to assess the performance of our systems.</p>



<p>Our clients include major players in the pharmaceutical industry, associations and hospitals, as well as institutions such as Organon, Cerba, RELX/Elsevier, ANSES, Abbvie, Janssen, AstraZeneca, Leo Pharma, Toulouse Oncopole, Fondation A. de Rothschild hospital, French Federation of Orthodontics, French Diabetes Society, etc.</p>
<p>The article <a href="https://jin.eu/lj-com-joins-jin-to-become-the-leading-independent-healthcare-communications-provider/">LJ Com joins JIN to become the leading independent healthcare communications provider</a> first appeared on <a href="https://jin.eu">JIN, Agency in Europe (France, UK, Germany...)</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
