
OpenAI Has Tightened Risk Controls: Read This Carefully to Avoid Getting Your ChatGPT API Access Banned


If you regularly use the OpenAI API, pay close attention to this article!

As we mentioned in earlier articles, OpenAI's services are only offered in certain countries, and using them from mainland China currently requires a proxy.

The OpenAI API used to be reachable from mainland China, but a few days ago, for reasons unknown, direct access to the API stopped working, and many users turned to reverse proxies or deployments on overseas servers.

And now something has gone wrong again!

Account Risk Controls

Recently, many users have reported that API calls from their accounts return an error.

API calls return the following error:

Your access was terminated due to violation of our policies, please check your email for more information. If you believe this is in error and would like to appeal, 【【邮箱】】

The notification email received:

Hi there,

After a thorough investigation, we have determined that you or a member of your organization are using the OpenAI API in ways that violate our policies.

Due to this breach we are halting access to the API immediately for the organization Personal. Common reasons for breach include violations of our content policy, repeated attempts at disallowed use-cases, or accessing the API from an unsupported location. You may also wish to review our usage policies.

If you believe this is in error and would like to appeal, please contact us through our help center. We will review your appeal within one business day and will contact you if we reinstate access to the API.

Best,
The OpenAI team

The account has clearly been flagged by risk controls. Checking the management dashboard shows the usage quota has been restricted as well.

Using ChatGPT through the web interface still works normally; only the API has been blocked!
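
If you want to check programmatically whether your key is blocked, here is a minimal sketch, assuming the legacy `openai` Python SDK (v0.x) with the API key read from the `OPENAI_API_KEY` environment variable; the model name is just an example:

```python
# Probe whether an API key still works, assuming the legacy openai
# Python SDK (v0.x). A banned organization typically surfaces as an
# authentication or permission error rather than a normal response.
import openai  # reads OPENAI_API_KEY from the environment by default

try:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # example model
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=5,
    )
    print("API access OK:", resp["choices"][0]["message"]["content"])
except openai.error.AuthenticationError as e:
    print("Authentication failed (key invalid or revoked):", e)
except openai.error.PermissionError as e:
    print("Access restricted (possibly the ban described above):", e)
except openai.error.OpenAIError as e:
    print("Other API error (rate limit, server issue, etc.):", e)
```

If the ChatGPT web interface still works for the same account but this probe fails with a permission error, you are likely in the situation described above.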

Cause Analysis

According to OpenAI's notice, a restriction can stem from three main causes. Check whether any apply to you:

Violating OpenAI's Terms of Use

Violating OpenAI's Usage Policies

Using the API from an unsupported region

Terms of Use: 【【网址】】/policies/terms-of-use

In short: the terms spell out what users must comply with when using OpenAI's services, including respecting OpenAI's intellectual property, obeying applicable laws and regulations, and protecting personal information. Users who discover vulnerabilities or violations must notify OpenAI promptly. The terms also cover service duration and termination, disclaimers, dispute resolution, and licensing of rights. If you violate them, OpenAI is entitled to terminate your service.

Usage Policies: 【【网址】】/docs/usage-policies

In short: prohibited uses include anything that breaks applicable laws and regulations, along with political content, privacy violations, infringement, and the like.

For the list of supported regions, see: 【【网址】】/docs/supported-countries

Antigua and Barbuda, Argentina, Armenia, Australia, Austria, Bahamas, Bangladesh, Barbados, Belgium, Belize, Benin, Bhutan, Bolivia, Bosnia and Herzegovina, Botswana, Brazil, Brunei, Bulgaria, Burkina Faso, Cabo Verde, Canada, Chile, Colombia, Comoros, Congo (Congo-Brazzaville), Costa Rica, Côte d'Ivoire, Croatia, Cyprus, Czechia (Czech Republic), Denmark, Djibouti, Dominica, Dominican Republic, Ecuador, El Salvador, Estonia, Fiji, Finland, France, Gabon, Gambia, Georgia, Germany, Ghana, Greece, Grenada, Guatemala, Guinea, Guinea-Bissau, Guyana, Haiti, Holy See (Vatican City), Honduras, Hungary, Iceland, India, Indonesia, Iraq, Ireland, Israel, Italy, Jamaica, Japan, Jordan, Kazakhstan, Kenya, Kiribati, Kuwait, Kyrgyzstan, Latvia, Lebanon, Lesotho, Liberia, Liechtenstein, Lithuania, Luxembourg, Madagascar, Malawi, Malaysia, Maldives, Mali, Malta, Marshall Islands, Mauritania, Mauritius, Mexico, Micronesia, Moldova, Monaco, Mongolia, Montenegro, Morocco, Mozambique, Myanmar, Namibia, Nauru, Nepal, Netherlands, New Zealand, Nicaragua, Niger, Nigeria, North Macedonia, Norway, Oman, Pakistan, Palau, Palestine, Panama, Papua New Guinea, Peru, Philippines, Poland, Portugal, Qatar, Romania, Rwanda, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Samoa, San Marino, Sao Tome and Principe, Senegal, Serbia, Seychelles, Sierra Leone, Singapore, Slovakia, Slovenia, Solomon Islands, South Africa, South Korea, Spain, Sri Lanka, Suriname, Sweden, Switzerland, *, Tanzania, Thailand, Timor-Leste (East Timor), Togo, Tonga, Trinidad and Tobago, Tunisia, Turkey, Tuvalu, Uganda, United Arab Emirates, United Kingdom, United States, Uruguay, Vanuatu, Zambia. The list above does not include mainland China or Hong Kong, so calling the OpenAI API from a Hong Kong IP may well get your account banned.
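
Before routing API traffic through a proxy, it is worth confirming which country the proxy's exit IP resolves to. Here is a minimal sketch using the `requests` library and the public ipinfo.io endpoint (any geo-IP service would do); the proxy address and the region set are illustrative placeholders, not an official list:

```python
# Check which country a proxy's exit IP appears to be in before
# sending OpenAI API traffic through it. The endpoint and the set of
# risky regions below are illustrative, not authoritative.
import requests

PROXY = "http://127.0.0.1:7890"  # placeholder: your proxy address
RISKY = {"CN", "HK"}             # regions absent from the supported list

proxies = {"http": PROXY, "https": PROXY}
info = requests.get("https://ipinfo.io/json", proxies=proxies, timeout=10).json()
country = info.get("country")

print(f"Exit IP {info.get('ip')} resolves to: {country}")
if country in RISKY:
    print("Warning: unsupported region; API calls may get the account banned.")
```

A clean exit IP is no guarantee, of course; region is only one of the signals a ban can be based on.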

Solution

At the moment, the only recourse is to contact online support or file a ticket describing your situation, then wait for OpenAI to resolve it.

Final Summary

If you use the API over the long term, keep the following in mind:

Pay particular attention to the region of the proxy your API calls go through.

Avoid public reverse-proxy API endpoints.

Check whether your API key is being abused.

Throttle the frequency of your API calls (a minimal sketch follows this list).
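
On that last point, here is a minimal client-side throttle with exponential backoff, again assuming the legacy `openai` SDK; the one-request-per-second pacing is an arbitrary example, not an official quota:

```python
# Pace requests on the client side and back off on rate-limit errors.
# The 1 req/s interval is an arbitrary example, not an OpenAI quota.
import time

import openai

MIN_INTERVAL = 1.0  # seconds between requests (example value)
_last_call = 0.0

def throttled_chat(messages, retries=3):
    global _last_call
    for attempt in range(retries):
        wait = MIN_INTERVAL - (time.time() - _last_call)
        if wait > 0:
            time.sleep(wait)
        try:
            _last_call = time.time()
            return openai.ChatCompletion.create(
                model="gpt-3.5-turbo", messages=messages
            )
        except openai.error.RateLimitError:
            time.sleep(2 ** (attempt + 1))  # back off: 2s, 4s, 8s...
    raise RuntimeError("still rate-limited after retries")
```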

One more piece of news: accounts registered before March 1 received an $18 trial credit, valid through June 30. Accounts registered after March 1 receive only a $5 trial credit, valid through July.

The trial credit can be used for API calls! See our earlier articles for billing details.

OpenAI Releases GPT-4; Its President Says a Higher-Tier Version Is Already in Testing

So it’s finally here: GPT-4. This is the latest and greatest artificial intelligence system from OpenAI, and a successor to the A.I. model that powers the wildly popular ChatGPT.

OpenAI, the San Francisco A.I. lab that is now closely tied to Microsoft, says that GPT-4 is much more capable than the GPT-3.5 model underpinning the consumer version of ChatGPT. For one thing, GPT-4 is multi-modal: it can take in images as well as text, although it only outputs text. This opens up the ability of the A.I. model to “understand” photos and scenes. (Although for now this image capability is only being offered through OpenAI’s partnership with Be My Eyes, a free mobile app for the blind and visually impaired.)

The new model performs much better than GPT-3.5 on a range of benchmark tests for natural language processing and computer vision. It also performs very well on a battery of different tests designed for humans, including a top-10% score on a simulated bar exam as well as scoring a five on a range of Advanced Placement exams, from Math to Art History. (Interestingly, the system scores poorly on both the AP English Literature and AP English Composition exams, and there is already some debate among machine-learning experts about whether there may be less than meets the eye to GPT-4’s stellar exam performance.)

The model, according to OpenAI, is 40% more likely to return factual answers to questions, although it may still in some cases simply invent information, a phenomenon A.I. researchers call “hallucination.” It is also less likely to jump the guardrails OpenAI has given it to keep it from spewing toxic or biased language, or recommending actions that might cause harm. OpenAI said GPT-4 is more likely to refuse such requests than GPT-3.5 was.

Still, GPT-4 has many of the same potential risks and flaws as other large language models. It isn’t entirely reliable. Its answers are unpredictable. It can be used to produce misinformation. It can still be pushed to jump its guardrails and give answers that are unsafe, either because they might be hurtful to the person reading the output or because they might encourage the person to take actions that would harm themselves or others. It can be used, for instance, to help someone find ways to make improvised weapons or explosives from household products.

Because of this, OpenAI cautioned users that “Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case.” And yet, OpenAI has released the model as a paid service to customers and businesses purchasing access through its cloud-based application programming interface (or API).

GPT-4’s release had been widely anticipated among those who follow A.I. developments. While ChatGPT took almost everyone by surprise when OpenAI released it in late November, it was widely known for at least a year that OpenAI was working on something called GPT-4, although there has been wild speculation about exactly what it would be. In fact, after ChatGPT became an unexpected hit, massively ramping up hype around A.I., Sam Altman, the CEO of OpenAI, felt it necessary to try to tamp down expectations surrounding GPT-4’s imminent release. “The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from,” Altman said in an interview in San Francisco in January. Referring to the idea of artificial general intelligence (or AGI), the kind of machine superintelligence that has been a staple of science fiction, he said, “people are begging to be disappointed and they will be. The hype is just like… We don’t have an actual AGI and that’s sort of what’s expected of us.”

On March 15, I talked to several of the researchers who helped build GPT-4 about its capabilities, limitations, and how they built it. The researchers spoke in general terms about the methods they used, but there is much about GPT-4 they are keeping under wraps, including the size of the model, exactly what data was used to train it, how many specialized computer chips (graphics processing units, or GPUs) were needed to train and run it, what its carbon footprint is, and more.

OpenAI was co-founded by Elon Musk, who has said he chose the name because he wanted the new research lab to be dedicated to democratizing A.I. and being transparent, publishing all its research. Over time, OpenAI has increasingly moved away from its founding dedication to transparency, and with little detail about GPT-4 being released, some computer scientists suggested it should change its name. “I think we can call it shut on ‘Open’ AI,” tweeted Ben Schmidt, the vice president of information design at a company called Nomic AI. “The 98 page paper introducing GPT-4 proudly declares that they’re disclosing *nothing* about the contents of their training set.”

Ilya Sutskever, OpenAI’s chief scientist, told Fortune the reason for this secrecy was primarily competitive: the company did not want commercial rivals to replicate its achievement. He also said that in the future, as A.I. models become ever more capable and those capabilities could be easily misused, it will be important for safety reasons to limit information about how the models were created.

At times, Sutskever spoke in terms that seemed designed to sidestep serious discussion of the model’s inner workings. He described a “recipe for producing magic” when discussing the high-level idea of generative pre-trained transformers, or GPTs, the basic model architecture that underpins most large language models. “GPT-4 is the latest manifestation of this magic,” Sutskever said. In response to a question about how OpenAI had managed to reduce GPT-4’s tendency to hallucinate, Sutskever said, “We just teach it not to hallucinate.”

Six months of fine-tuning for safety and ease of use

Two of Sutskever’s OpenAI colleagues did provide slightly more detail on how OpenAI “just taught it not to hallucinate.” Jakub Pachocki, a member of OpenAI’s technical staff, said the model’s increased size alone, and the larger amount of data it ingested during pre-training, seemed to be part of the reason for its increased accuracy. Ryan Lowe, who co-leads OpenAI’s team that works on “alignment,” or making sure A.I. systems do what humans want them to and don’t do things we don’t want them to do, said that the company also spent about six months after pre-training GPT-4 fine-tuning the model to be both safer and easier to use. One method it used, he said, was to collect human feedback on GPT-4’s outputs and then use that feedback to push the model toward generating responses that it predicted were more likely to get positive ratings from human reviewers. This process, called “reinforcement learning from human feedback,” was part of what made ChatGPT such an engaging and useful chatbot.
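
OpenAI has not published GPT-4’s training code, but the reward-modeling step at the core of reinforcement learning from human feedback is well documented in earlier work such as InstructGPT: a model is trained to score the response human raters preferred above the one they rejected. Here is a toy PyTorch sketch of that pairwise loss; the tiny “reward model” and the random stand-in embeddings are purely illustrative:

```python
# Toy illustration of the pairwise preference loss used to train an
# RLHF reward model: push the score of the human-preferred response
# above the rejected one. Everything here is a small-scale stand-in.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        # Maps a (prompt + response) embedding to a scalar reward.
        self.scorer = nn.Linear(embed_dim, 1)

    def forward(self, emb):
        return self.scorer(emb).squeeze(-1)

model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Random stand-ins for embeddings of chosen vs. rejected responses.
chosen, rejected = torch.randn(8, 64), torch.randn(8, 64)

# -log sigmoid(r_chosen - r_rejected), the InstructGPT-style objective.
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
opt.step()
print(f"reward-model loss: {loss.item():.4f}")
```

The trained reward model is then used as the feedback signal for a reinforcement-learning step that nudges the language model toward higher-scoring responses.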

Lowe said some of the feedback used to refine GPT-4 came from the experience of ChatGPT users, showing the way in which getting that chatbot out into the hands of hundreds of millions of people before many competitors debuted rival chatbots created a faster-spinning “data flywheel” for OpenAI that gives it an advantage in building future, advanced A.I. software that its rivals may find hard to match.

OpenAI specifically trained GPT-4 on more examples of accurate question-answering in order to boost the model’s ability to perform that task, and reduce the chances of it hallucinating, Lowe said. He also said that OpenAI used GPT-4 itself to generate simulated conversation data that was then fed back into the fine-tuning of GPT-4 to help it hallucinate less. This is another example of the “data flywheel” in action.
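
Lowe did not spell out that pipeline, but the general shape of such a self-generation loop is simple: sample outputs from the model, vet them, and fold the keepers back into a fine-tuning dataset. A hypothetical sketch, again assuming the legacy `openai` SDK; the questions, record format, and file name are placeholders:

```python
# Hypothetical "data flywheel" loop: have the model draft training
# examples, then keep vetted ones for later fine-tuning. The record
# format and file name here are placeholders, not OpenAI's pipeline.
import json

import openai

questions = ["What is the capital of France?", "Who wrote Hamlet?"]
examples = []
for q in questions:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": q}],
    )
    answer = resp["choices"][0]["message"]["content"]
    # In a real pipeline a human (or a grader model) would vet the
    # answer before it is allowed into the training set.
    examples.append({"prompt": q, "completion": answer})

with open("synthetic_finetune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```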

Is the “magic” reliable enough for release?

Sutskever defended OpenAI’s decision to release GPT-4, despite its limitations and risks. “The model is flawed, ok, but how flawed?” he said. “There are some safety mitigations that exist on the model right now,” he said, explaining that OpenAI judged these guardrails and safety measures to be effective enough for the company to release the model. He also noted that OpenAI’s terms and conditions of use prohibited certain malicious uses and that the company now had monitoring procedures in place to try to check that users were not violating them. He said this, in combination with GPT-4’s better safety profile on key metrics like hallucinations and the ease with which it could be “jailbroken,” or made to bypass guardrails, “made us feel that it is appropriate to proceed with the GPT-4 release, as we’re doing right now.”

In a demonstration for Fortune, OpenAI researchers asked the system to summarize an article about itself, but using only words that start with the letter ‘G’, which GPT-4 was able to do relatively well. Sutskever said that GPT-3.5 would have failed at the task, resorting to some words that did not start with ‘G.’ In another example, GPT-4 was presented with part of the U.S. tax code and then given details about a specific couple and asked to calculate how much tax they owed, with reference to the passage of regulations it had just been given. GPT-4 responded with the right amount of tax in about a second. (Although I was not able to go back through and double-check its answer.)

Despite impressive demonstrations, some A.I. researchers and technologists say that systems like GPT-4 are still not reliable enough for many enterprise use cases, particularly when it comes to information retrieval, because of the risk of hallucination. In cases where a human is asking it a question to which the user doesn’t know the answer, GPT-4 is still probably not appropriate. “Even as the hallucination rate goes down, until it is infinitesimal, or at least as small as would be the case with an expert human analyst, it is probably not appropriate to use it,” said Aaron Kalb, co-founder and chief strategy officer at Alation, a software company that builds data cataloging and retrieval software.

He also said that even prompting the model to answer only from a particular set of data or only using the model to summarize information surfaced through a traditional search algorithm might not be sufficient to be certain the model wasn’t making up some part of its answer or surfacing inaccurate or outdated information that it had ingested during its pre-training.
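
The kind of grounding described here usually means placing the retrieved source material directly in the prompt and instructing the model to answer only from it. A minimal sketch, assuming the legacy `openai` SDK; the context string stands in for whatever a real retrieval step would return:

```python
# Minimal "answer only from the provided context" prompt. Grounding
# like this reduces, but does not eliminate, invented details.
import openai

context = "ACME Corp was founded in 1987 in Portland."  # placeholder retrieved text

def grounded_answer(question):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the context below. If the answer "
                        "is not in the context, say you don't know.\n\n"
                        f"Context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp["choices"][0]["message"]["content"]

print(grounded_answer("When was ACME Corp founded?"))
```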

Kalb said whether it was appropriate to use large language models would depend on the use case and whether it was practical for a human to review the A.I.’s answers. He said that asking GPT-4 to generate marketing copy, in cases where that copy is going to be reviewed by a human, was probably fine. But in situations where it wasn’t possible for a human to fact-check everything the model produced, relying on GPT-4’s answers might be dangerous.