Software Engineer, Senior Staff - Kernels · d-Matrix Corporation · Santa Clara, CA
Description
The role requires you to join the team that productizes the software stack for our AI compute engine. As part of the Software team, you will develop, enhance, and maintain software kernels for next-generation AI hardware. You have experience building software kernels for hardware architectures, a strong understanding of various hardware architectures, and know how to map algorithms onto them. You understand how to map computational graphs generated by AI frameworks to the underlying architecture. You have worked across all aspects of the full-stack toolchain and understand the nuances of optimizing and trading off various aspects of hardware-software co-design. You can build and scale software deliverables in a tight development window. You will work with a team of compiler experts to build out the compiler infrastructure, collaborating closely with other software (ML, systems) and hardware (mixed-signal, DSP, CPU) experts in the company.
Requirements
What you will bring:
- MS or PhD in Computer Engineering, Math, Physics, or a related field, with 5+ years of industry experience.
- Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals.
- Proficiency in C/C++ and Python development in a Linux environment using standard development tools.
- Experience implementing algorithms in high-level languages such as C/C++ and Python.
- Experience implementing algorithms for specialized hardware such as FPGAs, DSPs, GPUs, and AI accelerators using libraries such as CUDA.
- Experience implementing operators commonly used in ML workloads: GEMMs, convolutions, BLAS routines, and SIMD operators for operations like softmax, layer normalization, and pooling.
- Experience developing for embedded SIMD vector processors such as Tensilica.
- Self-motivated team player with a strong sense of ownership and leadership.

Preferred:
- Prior startup, small team, or incubation experience.
- Experience with deep learning frameworks such as TensorFlow and/or PyTorch, and with ML models for CV, NLP, or recommendation.
- Experience working with ML compilers and infrastructure such as MLIR, LLVM, TVM, or Glow.
- Work experience at a cloud provider or an AI compute/sub-system company.
Company Description
d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The "holy grail" of AI compute has been to break through the memory wall and minimize data movement, and we have achieved this with a first-of-its-kind DIMC engine. Having secured over $154M in funding, including $110M in our Series B offering, d-Matrix is poised to accelerate generative inference for Large Language Models at scale with our chiplet and in-memory compute approach. We are on track to deliver our first commercial product in 2024 and to meet the energy and performance demands of these Large Language Models. The company has 100+ employees across Silicon Valley, Sydney, and Bengaluru. Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS, and Wave Computing. Our past successes include building chips for all the global cloud hyperscalers - Amazon, Facebook, Google, Microsoft, Alibaba, and Tencent - along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon, and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair
Registration Categories: TP, W, TUT, XO/EX