OpenAI is hiring for a team to "co-design future hardware from different vendors for programmability and performance."
In a job listing for a 'HW/SW Co-design Engineer,' the generative artificial intelligence company said that the kernels team would "support important AI accelerators and [help] with hardware/software co-design of future AI accelerators."
To do so, the new hire will "work with our kernel, compiler and ML [machine learning] developers to understand their needs related to ML techniques, algorithms, numerical approximations, programming expressivity and compiler optimizations," the job listing states.
"You will evangelize these constraints with various vendors to develop future hardware architectures amenable for efficient training and inference. If you are excited about maximizing HBM bandwidth, optimizing for low arithmetic intensity, expressive SIMD ISA, low-precision formats, optimizing for memory hierarchies, simulating workloads at various resolutions of the hardware and evangelizing these ideas with hardware engineers, this is the perfect opportunity!"
Candidates are expected to have a "deep understanding of GPU and/or other AI accelerators," and "have good experience with [Nvidia's] CUDA or a related accelerator programming language."
The employee will work alongside OpenAI's Hardware Design Team, which, as DCD exclusively reported last year, is led by Richard Ho, previously of photonic chip company Lightmatter and a former lead of Google's TPU AI chip family.
That team is still hiring a deep learning hardware/software co-design engineer, a role DCD previously reported will be part of a group that will "include experts in hardware design as well as data center facility design."
OpenAI currently uses Microsoft's Azure cloud as part of a $13bn investment the tech giant made in the AI lab last year, much of which came in the form of cloud credits. The ChatGPT developer has pledged to help inform Microsoft's AI chip development, including the Maia AI Accelerator, which Microsoft announced in November last year.
The startup is also believed to have considered building its own chip hardware, and to have evaluated a potential acquisition target. CEO Sam Altman has separately been trying to raise billions for a chip venture of his own, and is reportedly seeking as much as $7 trillion.
OpenAI previously promised to buy $51m worth of chips from neuromorphic chip company Rain AI - in which Altman is an investor.
For now, however, OpenAI primarily relies on Nvidia's GPUs to develop its large language models - helping propel the chip company's valuation to more than $2 trillion.