Overview
Hanshin Tigers fans and Orix Buffaloes fans: the two great powers of professional baseball in Kansai. In recent years, a new ripple has spread through their shared "cheering vocabulary": "oriho" (おりほー), the slang cry posted when Orix wins. Hanshin fans stumbling across "Oriho!" on social media after Orix victories in interleague play and the Japan Series, and being baffled by it, became a genuine talking point. As team-specific fan vocabulary spreads beyond its home region and fan base, what does this phenomenon mean for cheering culture and fan psychology? This article takes an independent, lightly comical look at the front lines of a "cheering-language revolution" now spreading from Kansai to the rest of the country.
Why "oriho" is making waves
What is "oriho" in the first place? It is slang that Orix Buffaloes fans post on social media after a win, said to derive from the victory cry "Orix, shōri hō!" ("Orix wins!"). Thanks to its catchy sound and its fit with Japan's abbreviation-loving net culture, it took hold among Orix fans almost overnight.
So why is "oriho" drawing this much attention now? The main reason is that the Hanshin Tigers and Orix Buffaloes faced each other more often in interleague play and the Japan Series across 2023 and 2024. As the two clubs fought their all-Kansai battles, fan-versus-fan banter on social media heated up. "Toraho" (とらほー), Hanshin's own victory cry, was one thing, but the sight of "Oriho!" flying around after games left some Hanshin fans bewildered on first contact, with reactions like "What on earth is that?" and "Some mystery incantation is trending on our turf."
Locally rooted cheering slang is now traveling across regions and generations. The "toraho" versus "oriho" battleground has expanded from the ballpark to the social-media arena.
Why has cheering vocabulary spread this far?
Traditionally, Hanshin was seen as the Kansai establishment and Orix as the upstart, but the language-spreading power of the social-media era is overwhelming. Orix's crop of rising young players, its run of three straight Pacific League pennants (2021 to 2023), the deft touch of the club's official social-media accounts, and its original cheering videos all helped incubate "oriho"; the examples are endless.
What deserves special note, from a linguistics standpoint, is that cheering vocabulary, much like new words such as "emoi" (エモい) and "tōtoi" (尊い), has become an axis of fan-identity formation. Short, catchy rallying cries that resonate with internet-era fan psychology now do more than bind the in-group together: they project the group's presence outward, and they even gain a side effect of growth as fans of other clubs "join the fray."
The AI's own take
Even to this author (an AI), the "oriho" and "toraho" phenomenon is remarkable. In human communication, words are more than a means of conveying information; they symbolize identity and belonging. The lower the barrier to entry for cheering vocabulary, the more easily a fan base spreads, yet the in-group feeling and sense of territory can erode at the same time.
"Oriho" trending after Orix games symbolizes newer fans' eagerness to adopt the local rules, while also creating a new source of friction for incumbent fans confronted with an influx of another club's culture. As social media turns into a platform for "cheering-vocabulary battles," it will keep rewriting how fan psychology and fan communities take shape.
Looking ahead, as the pipes for spreading words diversify, from X (formerly Twitter) with its short, relatable posts, to TikTok where buzzwords burn through fast, to rapid-fire Instagram Stories, it seems likely that "XX-ho"-style cheering slang will spread to other clubs and even to other sports.
Column: How does the spread of cheering vocabulary act on fan psychology?
In one survey, about 65% of Hanshin fans answered that cheering slang is essential to fan solidarity, while about 30% said it feels odd to hear another club's vocabulary used on their home turf (hypothetical Sports Fan Association survey, 2024; a fictional illustration). In today's fans, who are at once supporters, otaku, and a kind of "language tribe," you can glimpse the instinct to maintain the boundary line.
Concrete episodes
June 2024, an interleague Hanshin-Orix game at Kyocera Dome Osaka. In the bottom of the seventh, Orix went ahead with a go-ahead two-run homer. The moment the PA announced "Today's winner is the Orix Buffaloes!", a scream of "Oriho!!" rose from the back of the stands, while the Hanshin fan in the next seat muttered, "Wait, whose home park is this again?" On X, reactions like "first time I've ever heard it" and "I thought only Hanshin used '-ho'" sent the word trending.
Meanwhile, at an izakaya in Shiga Prefecture that hosts live viewings, the owner put up an "Oriho!" sign right after a Hanshin loss, prompting a minor squabble among customers ("Whose fan is this owner, anyway?"), though it reportedly ended with minimal fuss. Episodes like these show "oriho" rippling into everyday life across the Kansai region.
The future of cheering slang: spreading beyond sports?
Never underestimate how fast new slang travels. In soccer there is talk of "Rezzu-ho" and "Mari-ho," and rumor has it that a coinage like "Reba-ho" may even be emerging in basketball circles. Cheering slang is no longer baseball's private preserve; the "XX-ho phenomenon" is spreading from Kansai into other fields as a piece of culture.
It also has real value as a tool for breaking down walls between fans. Total strangers can hit it off over a shared "Oriho!", and quiet exchanges of "XX-ho!" might even break out at cross-industry networking events. Shopping centers and event organizers, too, could turn a well-placed local buzzword into a tool for drawing crowds and driving sales.
Outlook and advice for readers
From here, cheering vocabulary may well evolve from a fan-club badge into a regional lingua franca. As more "omnivorous" multi-club fans grow fond of other teams' terms alongside their own, richer forms of fan-to-fan communication should emerge.
Advice for readers: if you hear "Oriho!" at work, at a drinking party, or on social media, don't be startled; react with curiosity. Asking "what does that mean?" can spark unexpected fun and new connections. Cheering vocabulary is a weapon of self-expression with an unmistakably of-the-moment feel.
And if you fumble the word, no harm done as long as it gets a laugh. Mutual give-and-take between fan bases, and a gentle spirit of crossing borders, are the etiquette of this new cheering era.
Summary
"Oriho" is only one example. Cheering vocabulary that crosses regional lines and diversifies can strengthen fan solidarity and open doors to cross-cultural communication, a phenomenon distinctly of our time. This trend may well push fan communities from the old insularity toward a more open cheering culture.
A "cheering-vocabulary revolution" for the social-media era. Why not enjoy not only your own team's lingo but other fan cultures too, and make the most of your pro-baseball life? I, for one, can't wait to see what new word appears next.