Meta is looking for ASIC engineers to help the company build its data center accelerators and systems-on-chip (SoCs).

According to a report in The Register, the Facebook, Instagram, and WhatsApp parent company has posted job adverts seeking workers with expertise in architecture, design, and testing in Bangalore, India, and Sunnyvale, California.


The Bangalore-based architecture engineer role calls for someone with “more than 10 years of experience and knowledge of computer architecture concepts” to “work on advanced architecture, algorithms and models targeting machine learning solutions.”

Other responsibilities include analyzing and mapping data center workloads to ASIC architecture and implementing and analyzing algorithms for data center machine learning accelerators.

The design engineer role requires a minimum of seven years of silicon development experience. Successful candidates will be responsible for micro-architecture development, register-transfer level (RTL) development, and collaborating with other teams on timing, planning, development, and debugging.

Both roles say “equivalent practical experience” will be considered in lieu of professional qualifications in computer science, computer engineering, or other relevant technical fields.

Although the job adverts don’t reference any specific project, earlier this month it was reported that Meta was hoping to reduce its reliance on Nvidia by deploying an updated version of its own AI-focused custom chips into its data centers this year.

First reported to be in development in 2023, the chips, dubbed the Meta Training and Inference Accelerator (MTIA), are built on a 7nm process and deliver 102 TOPS of INT8 computation or 51.2 teraflops of FP16 computation. They run at 800MHz and have a die area of about 370 square millimeters.
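Those headline figures hang together in a back-of-envelope check. The sketch below uses only the numbers quoted above (102 TOPS INT8, 51.2 teraflops FP16, 800MHz); the roughly 2:1 INT8-to-FP16 ratio it surfaces is an assumption about the design, typical of MAC arrays that pack two 8-bit operations into each 16-bit lane, not something the report states.

```python
# Back-of-envelope check on the MTIA figures quoted in the article.
# The throughput and clock numbers come from the report; the 2:1
# INT8/FP16 interpretation is an assumption about the MAC array design.

int8_ops_per_sec = 102e12    # 102 TOPS at INT8 precision
fp16_flops_per_sec = 51.2e12 # 51.2 teraflops at FP16 precision
clock_hz = 800e6             # 800 MHz core clock

# Ratio of integer to floating-point throughput (~2x is expected
# when two INT8 ops share one FP16 lane).
ratio = int8_ops_per_sec / fp16_flops_per_sec

# How many INT8 operations the chip must retire every clock cycle
# to sustain the quoted rate.
ops_per_cycle = int8_ops_per_sec / clock_hz

print(f"INT8/FP16 throughput ratio: {ratio:.2f}")      # ~1.99
print(f"INT8 ops per cycle: {ops_per_cycle:,.0f}")     # 127,500
```

At roughly 128,000 integer operations per cycle, the numbers imply a wide matrix-multiply engine rather than a conventional CPU-style core, which is consistent with the accelerator framing.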

Meta was originally expected to roll out its in-house chips in 2022 but scrapped the plan after they failed to meet internal targets. The industry-wide shift from CPUs to GPUs for AI training subsequently forced the company to redesign its data centers and cancel multiple projects.