Jobs
Tuesday, 19 November 2024, 10:30am-3pm, Exhibit Hall A3 - Job Fair Inside
Talent Acquisition Lead · Argonne National Laboratory · Chicago, IL
Session: Job Postings
Description: We are seeking a highly motivated Postdoctoral Appointee with a strong background in AI/ML, specifically in the development and application of Large Language Models (LLMs) tailored for scientific use cases. This position is focused on advancing the capabilities of LLMs to address complex problems within specific scientific domains, with an emphasis on climate risk assessment and analysis.
As part of a multidisciplinary team, the Postdoctoral Appointee will work at the intersection of AI/ML, climate science, and high-performance computing. The candidate will develop LLMs designed to understand, process, and analyze scientific data related to climate risks, including extreme weather events, long-term environmental changes, and their impact on infrastructure and ecosystems.
This role requires not only expertise in LLMs and machine learning but also an understanding of the unique challenges posed by scientific data, which often includes large-scale numerical datasets, complex simulations, and multimodal information. This position offers the opportunity to work with some of the world’s most advanced computing resources, including Exascale supercomputers, and to collaborate with leading experts across a range of disciplines. Those who are passionate about using cutting-edge AI to address some of the most critical challenges facing our planet are encouraged to apply.
Key Responsibilities:
• Optimize Retrieval-Augmented Generation (RAG) techniques to improve the relevance and contextual accuracy of LLM-generated content.
• Explore and apply multimodal LLMs capable of effectively processing and integrating scientific data from diverse sources, including numerical tables, text, and images.
• Design and implement LLM guardrails to enhance the reliability and accuracy of model outputs in scientific applications.
• Develop and refine automatic evaluation techniques that enable the continuous assessment of LLM performance, particularly in terms of accuracy, relevance, and robustness in scientific contexts.
• Implement conformal prediction and uncertainty quantification techniques to provide reliable risk assessments and uncertainty estimates in LLM applications.
• Present research findings at national and international conferences.
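As a toy illustration of the conformal-prediction responsibility above, here is a minimal split-conformal sketch in Python. The calibration scores, alpha level, and candidate answers are invented for illustration (nonconformity is taken as one minus model confidence); this is not Argonne's actual method.

```python
import math

def split_conformal_threshold(calibration_scores, alpha=0.1):
    """Compute the conformal threshold from held-out nonconformity scores.

    Under exchangeability, a new example's score falls at or below this
    threshold with probability >= 1 - alpha.
    """
    n = len(calibration_scores)
    # Finite-sample-corrected quantile rank.
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(calibration_scores)[min(k, n) - 1]

def prediction_set(candidate_scores, threshold):
    """Keep every candidate answer whose nonconformity score is within the threshold."""
    return [ans for ans, s in candidate_scores.items() if s <= threshold]

# Invented calibration scores (1 - confidence on the known-correct answer).
cal = [0.1, 0.3, 0.2, 0.4, 0.25, 0.15, 0.35, 0.05, 0.45, 0.3]
t = split_conformal_threshold(cal, alpha=0.2)
answers = {"flood risk: high": 0.2, "flood risk: low": 0.6}
print(prediction_set(answers, t))  # only the low-nonconformity answer survives
```

The returned set, rather than a single answer, is what carries the uncertainty estimate: a wider set signals lower model confidence for that query.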
Requirements:
• Recently completed PhD (typically within the last 0-5 years, or to be awarded in 2024) in computer science, applied mathematics, or a closely related field.
• Strong programming skills in Python, or other relevant languages used in AI/ML.
• Significant knowledge in machine learning (ML) and applied mathematics.
• Ability to conduct independent research and demonstrated publication record in peer-reviewed conferences and journals.
• Innovative thinking and problem-solving skills in tackling complex scientific challenges.
• Collaborative skills, including working well with other scientists, divisions, laboratories, and universities.
• Effective oral and written communication skills with all levels of the organization.
• Ability to model Argonne's core values of impact, respect, safety, integrity, and teamwork.
Desired skills and experience:
• Experience in large language models and their applications in scientific domains.
• Expertise in developing and using machine learning models for climate risk analysis and prediction.
Company DescriptionAt Argonne, we view the world from a different perspective. Our scientists and engineers conduct world-class research in clean energy, the environment, technology, national security and more. We’re finding creative ways to prepare the world for a better future.
2024-08-30
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
Argonne National Laboratory
In-person
Full Time
2025 Luis W. Alvarez and Admiral Grace M. Hopper Postdoctoral Fellowships in Computing Sciences · Lawrence Berkeley National Laboratory · Berkeley, CA 94720
Session: Job Postings
Description: 2025 Luis W. Alvarez and Admiral Grace M. Hopper Postdoc Fellowship in Computing Sciences — 102564
Division: AC-Computing
Luis W. Alvarez Postdoctoral Fellowship and Admiral Grace M. Hopper Postdoctoral Fellowship in Computing Sciences
The Computing Sciences Area (https://cs.lbl.gov/) at Lawrence Berkeley National Laboratory (https://www.lbl.gov) is now accepting applications for two distinguished postdoctoral fellowships in Computing Sciences:
• Luis W. Alvarez Postdoctoral Fellowship, and
• Admiral Grace M. Hopper Postdoctoral Fellowship.
Researchers in computer science, mathematics, data science, or any computational science discipline who have received their PhD no earlier than January 1, 2022, but no later than September 30, 2025, are encouraged to apply. Only one (1) application is needed and it will be considered for both postdoctoral fellowships.
The successful candidates will participate in research activities in computer science, mathematics, data science, or any computational science discipline of interest to the Computing Sciences Area and Berkeley Lab.
Alvarez Fellows apply advances in computer science, mathematics, computational science, data science, machine learning or AI to computational modeling, simulations, and advanced data analytics for scientific discovery in materials science, biology, astronomy, environmental science, energy, particle physics, genomics, and other scientific domains.
Hopper Fellows concentrate on the development and optimization of scientific and engineering applications leveraging high-speed network capability provided by the Energy Sciences Network or run on next-generation high performance computing and data systems hosted by the National Energy Research Scientific Computing Center at Berkeley Lab.
Since its founding in 2002, Berkeley Lab’s Luis W. Alvarez Postdoctoral Fellowship (go.lbl.gov/alvarez) has cultivated exceptional early-career scientists who have gone on to make outstanding contributions to computer science, mathematics, data science, and computational sciences. The Admiral Grace M. Hopper Postdoctoral Fellowship (go.lbl.gov/hopper) was first awarded in 2015 with the goal of enabling early-career scientists to make outstanding contributions in computer science and high performance computing (HPC) research.
About Computing Sciences at Berkeley Lab:
Whether running extreme-scale simulations on a supercomputer or applying machine-learning or data analysis to massive datasets, scientists today rely on advances in and integration across applied mathematics, computer science, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences Area researches, develops, and deploys new tools and technologies to meet these needs and to advance research in our core capabilities of applied mathematics, computer science, data science, and computational science. In addition to fundamental advances in our core capabilities, we impact such areas as astrophysics and cosmology, accelerator physics, chemical science and materials science, combustion, fusion energy, nuclear physics, biology, climate change, and HPC systems and network technology. Research areas in Computing Sciences include but are not limited to:
• Developing scientific applications and software technologies for extreme-scale and energy-efficient computing
• Developing mathematical modeling for complex scientific problems
• Designing algorithms to improve the performance of scientific applications
• Researching digital and post-digital computer architectures for science
• Developing and advancing extreme-scale scientific data management, analysis, and visualization
• Developing and advancing next-generation machine learning, AI, and data science approaches for science
• Advancing quantum computing and networking technologies, software, algorithms and applications
• Evaluating or developing new and promising HPC systems and networking technologies
• Researching methods to control and manage next-generation networks
• Managing scientific data and workflows in distributed environments
Requirements:
• Requires a PhD in computer science, mathematics, computational science, or related discipline.
• Candidates must have received their PhD within the last three years.
• Expertise with advanced algorithms, software techniques, HPC systems and/or networking in a related research field.
• Demonstrated creativity and the ability to perform independent research.
• Demonstrated excellence in a related research field.
• Ability to develop new cross-disciplinary partnerships that use advanced computational and/or mathematical techniques to produce unique lab capabilities.
• Excellent communication skills with the ability to facilitate communications and collaborations with internal and external stakeholders.
Additional Desired Qualifications:
• Knowledge of advanced computing and high-performance computing.
Application Process:
As part of your application process, you must upload and submit the following materials with your online application:
1. Cover letter
2. CV, with publication list included
3. Research Statement — no more than five (5) pages in length when printed using standard letter-size (8.5 inch x 11 inch) paper with 1-inch margins (top, bottom, left, and right) and a font size not smaller than 11 point; figures and references cited, if included, must fit within the five-page limit.
4. Contact information (name, affiliation, and email address) of at least three (3) individuals who will be able to provide letters of reference.
Application deadline: October 31, 2024.
* It is highly advisable that you have all the required application materials and information ready and available prior to completing and submitting your application. Your application will not be considered complete if any of the above information is missing.
Tentative Application Timeline:
The Computing Sciences Fellowship Selection Committee is made up of a diverse representation of scientists and engineers across Berkeley Lab’s Computing Sciences Area who will conduct a thorough review of all applications received.
• Application deadline: October 31, 2024
• Review and selection: October 2024–December 2024
• Decisions made: January/February 2025
Notes:
• The selected candidates will be offered either the Luis W. Alvarez Postdoctoral Fellowship or the Admiral Grace M. Hopper Postdoctoral Fellowship based on their research focus.
• This position is expected to pay $12,010/month.
• This position is represented by a union for collective bargaining purposes.
• This position may be subject to a background check. Having a conviction history will not automatically disqualify an applicant from being considered for employment. Any convictions will be evaluated to determine if they directly relate to the responsibilities and requirements of the position.
• Work may be performed on-site, or hybrid. Work must be performed within the United States.
Want to learn more about working at Berkeley Lab? Please visit: careers.lbl.gov
How To Apply:
Apply directly online at http://50.73.55.13/counter.php?id=290350 and follow the online instructions to complete the application process.
Company Description: Berkeley Lab is committed to inclusion, diversity, equity and accessibility and strives to continue building community with these shared values and commitments. Berkeley Lab is an Equal Opportunity and Affirmative Action Employer. We heartily welcome applications from women, minorities, veterans, and all who would contribute to the Lab's mission of leading scientific discovery, inclusion, and professionalism. In support of our diverse global community, all qualified applicants will be considered for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, or protected veteran status.
Equal Opportunity and IDEA Information Links:
Know your rights: see http://www.dol.gov/ofccp/regs/compliance/posters/ofccpost.htm for the Equal Employment Opportunity is the Law supplement, and https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf for the Pay Transparency Nondiscrimination Provision under 41 CFR 60-1.4.
2024-09-23
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
Lawrence Berkeley National Laboratory
In-person
Full Time
Computational Scientist — BioHPC · BioHPC at UT Southwestern Medical Center · Dallas, TX
Session: Job Postings
Description: Are you a …
• User of HPC systems with a background in biomedical or basic science research who wants to transition into the HPC side of enabling new discoveries?
• Computer scientist or engineer with a focus on HPC-specific innovation and research?
• Programmer with Linux experience on multiple platforms or strong C++ skills?
• Student or recent graduate wanting to broaden your skills to include HPC expertise?
BioHPC is recruiting for full-time positions and internships across a wide range of skills and experience. We operate with a “flat” organizational structure that strives to foster a community of peer computational scientists. Job duties vary by skill specialization and team members may function as generalists or as subject matter experts. You will find ample opportunities to explore different systems and applications, provide training and education, work on diverse research projects, and function as project manager of specific initiatives so that you experience intellectual stimulation and professional growth throughout your career with BioHPC.
* The positions with BioHPC are not traditional bioinformatics or data analysis roles. Instead, they focus on providing an infrastructure for scientific computing in biomedical and clinical research areas.
JOB DUTIES
• Support faculty and research teams from all BioHPC member departments at UTSW in adapting computational strategies to the specific features of the BioHPC infrastructure.
• Work with a range of systems and technologies such as compute clusters, parallel file systems, high-speed interconnects, GPU-based computing, and database servers.
• Automate continuous integration and continuous deployment (CI/CD) pipelines to deliver software in the HPC environment, including containerized environments.
• Develop software and methods to explore, analyze and visualize very complex and high dimensional biological and biomedical data sets.
• Design and optimize workflows for the high-performance compute environment for data collection, data integrity, stable data flow, and ensuring data security during transfer and at rest.
• Participate with UTSW faculty in the design and execution of collaborative research studies in biomedical informatics, providing expertise on the HPC environment.
• Interact with users to understand computational research needs and develop/deliver training on the application and usage of computational systems to help accelerate the pace of scientific discovery across UTSW’s research community.
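One concrete piece of the data-integrity duty above is verifying checksums after a transfer. A minimal streamed-digest sketch in Python follows; the function names are illustrative, not part of BioHPC's tooling.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so very large datasets never sit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_transfer(path, expected_hex):
    """Compare the received file's digest against the digest recorded at the source."""
    return sha256_of(path) == expected_hex
```

In practice the source-side digest would be recorded before the transfer (for example, in a manifest) and checked on arrival and periodically at rest.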
Requirements: Bachelor’s degree in computer science, engineering, physics, or other field related to biomedical and computational research. Experience with HPC and/or demonstrated success in publishing computational science results is a strong plus. Master’s degree or PhD preferred.
Company Description: The research computing infrastructure for UT Southwestern Medical Center is the Biomedical High Performance Computing (BioHPC) resource that includes an HPC cluster containing conventional and GPU-based nodes for parallel computing, large-scale data storage, and integration of HPC with high-performance desktop workstations. The Computational Scientist – BioHPC will work on the daily operations of the HPC system, provide user support and training, and collaborate on high-end computational research projects.
2024-10-18
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
BioHPC - UT Southwestern Medical Center
In-person
Full Time
ML Compiler Engineer, Staff - Sr Staff · d-Matrix Corporation · Toronto, Ontario
Session: Job Postings
Description: The role: ML Compiler Engineer, Staff
What you will do:
The d-Matrix compiler team is looking for exceptional candidates to help develop the compiler backend, specifically the problem of assigning hardware resources in a spatial architecture to execute low-level instructions. The successful candidate will be motivated, capable of solving algorithmic compiler problems, and interested in learning the intricate details of the underlying hardware and software architectures. The candidate will join a team of experienced compiler developers, who will guide a quick ramp-up in the compiler infrastructure in order to attack the important problem of mapping low-level instructions to hardware resources. We have opportunities specifically in the following areas:
Model partitioning (pipelined, tensor, model and data parallelism), tiling, resource allocation, memory management, scheduling and optimization (for latency, bandwidth and throughput).
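The tiling and resource-allocation problems listed above can be sketched at a toy level. The function names, tile shapes, and round-robin core policy below are invented for illustration and do not reflect d-Matrix's actual compiler.

```python
def tile_ranges(dim, tile):
    """Split one dimension into contiguous (start, end) tiles; the last tile may be short."""
    return [(start, min(start + tile, dim)) for start in range(0, dim, tile)]

def partition_matmul_outputs(M, N, tile_m, tile_n, num_cores):
    """Assign each (row-tile, col-tile) output block of an M x N matmul result
    to a logical core, round-robin over num_cores."""
    blocks = []
    for rows in tile_ranges(M, tile_m):
        for cols in tile_ranges(N, tile_n):
            blocks.append({"rows": rows, "cols": cols, "core": len(blocks) % num_cores})
    return blocks

# A 6 x 8 output tiled into 4 x 4 blocks yields 2 x 2 = 4 blocks over 2 cores.
blocks = partition_matmul_outputs(M=6, N=8, tile_m=4, tile_n=4, num_cores=2)
print(len(blocks))
```

A real backend would additionally weigh memory capacity per core, data movement between tiles, and latency/bandwidth/throughput objectives when choosing tile sizes and placements.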
Requirements: Bachelor's degree in Computer Science with 7+ years of relevant industry experience; MS in Computer Science preferred, with 5+ years of relevant industry experience.
Ability to deliver production quality code in modern C++.
Experience in modern compiler infrastructures, for example: LLVM, MLIR.
Experience in machine learning frameworks and interfaces, for example: ONNX, TensorFlow and PyTorch.
Experience in production compiler development.
Preferred:
Algorithm design ability, from high level conceptual design to actual implementation.
Experience with relevant Open Source ML projects like Torch-MLIR, ONNX-MLIR, Caffe, TVM.
Passionate about thriving in a fast-paced and dynamic startup culture.
Company Description: d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The “holy grail” of AI compute has been to break through the memory wall to minimize data movement. We’ve achieved this with a first-of-its-kind DIMC engine. Having secured over $154M, including $110M in our Series B offering, d-Matrix is poised to advance Large Language Models and scale generative inference acceleration with our chiplets and in-memory compute approach. We are on track to deliver our first commercial product in 2024 and to meet the energy and performance demands of these Large Language Models. The company has 100+ employees across Silicon Valley, Sydney, and Bengaluru.
Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS, and Wave Computing. Our past successes include building chips for all the cloud hyperscalers globally (Amazon, Facebook, Google, Microsoft, Alibaba, Tencent) along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon, and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
ML Compiler Engineer, Sr Staff - Principal · d-Matrix Corporation · Santa Clara, CA
Session: Job Postings
Description: The role: ML Compiler Engineer, Sr Staff - Principal
What you will do:
The d-Matrix compiler team is looking for exceptional candidates to help develop the compiler backend, specifically the problem of assigning hardware resources in a spatial architecture to execute low-level instructions. The successful candidate will be motivated, capable of solving algorithmic compiler problems, and interested in learning the intricate details of the underlying hardware and software architectures. The candidate will join a team of experienced compiler developers, who will guide a quick ramp-up in the compiler infrastructure in order to attack the important problem of mapping low-level instructions to hardware resources. We have opportunities specifically in the following areas:
Model partitioning (pipelined, tensor, model and data parallelism), tiling, resource allocation, memory management, scheduling and optimization (for latency, bandwidth and throughput).
Requirements: Bachelor's degree in Computer Science with 12+ years of relevant industry experience; MS in Computer Science preferred, with 7+ years of relevant industry experience.
Ability to deliver production quality code in modern C++.
Experience in modern compiler infrastructures, for example: LLVM, MLIR.
Experience in machine learning frameworks and interfaces, for example: ONNX, TensorFlow and PyTorch.
Experience in production compiler development.
Preferred:
Algorithm design ability, from high level conceptual design to actual implementation.
Experience with relevant Open Source ML projects like Torch-MLIR, ONNX-MLIR, Caffe, TVM.
Passionate about thriving in a fast-paced and dynamic startup culture.
Company Description: d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The “holy grail” of AI compute has been to break through the memory wall to minimize data movement. We’ve achieved this with a first-of-its-kind DIMC engine. Having secured over $154M, including $110M in our Series B offering, d-Matrix is poised to advance Large Language Models and scale generative inference acceleration with our chiplets and in-memory compute approach. We are on track to deliver our first commercial product in 2024 and to meet the energy and performance demands of these Large Language Models. The company has 100+ employees across Silicon Valley, Sydney, and Bengaluru.
Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for all the cloud hyperscalers globally - Amazon, Facebook, Google, Microsoft, Alibaba, Tencent along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon, AT&AT. We are recognized leaders in the mixed signal, DSP connectivity space, now applying our skills to next generation AI.
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Software Engineer, Senior Staff - Kernels
·
d-Matrix Corporation
·
Santa Clara, CA
Description: The role requires you to be part of the team that productizes the SW stack for our AI compute engine. As part of the Software team, you will be responsible for the development, enhancement, and maintenance of software kernels for next-generation AI hardware. You have experience building software kernels for hardware architectures, a very strong understanding of various hardware architectures, and know how to map algorithms onto them, including the computational graphs generated by AI frameworks. You have worked across all aspects of the full-stack toolchain and understand the nuances of optimizing and trading off various aspects of hardware-software co-design. You can build and scale software deliverables in a tight development window. You will work with a team of compiler experts to build out the compiler infrastructure, working closely with other software (ML, Systems) and hardware (mixed signal, DSP, CPU) experts in the company.
Requirements - What you will bring:
MS or PhD in Computer Engineering, Math, Physics or related degree with 5+ years of industry experience.
Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals.
Proficient in C/C++ and Python development in Linux environment and using standard development tools.
Experience implementing algorithms in high-level languages such as C/C++ and Python.
Experience implementing algorithms for specialized hardware such as FPGAs, DSPs, GPUs, and AI accelerators using libraries such as CUDA, etc.
Experience in implementing operators commonly used in ML workloads - GEMMs, Convolutions, BLAS, SIMD operators for operations like softmax, layer normalization, pooling, etc.
Experience with development for embedded SIMD vector processors such as Tensilica.
Self-motivated team player with a strong sense of ownership and leadership.
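For concreteness, one of the operators listed above (softmax) can be sketched as a numerically stable scalar version in plain Python; a production kernel would of course vectorize this for the target hardware:

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating so exp() cannot overflow
    # (a standard numerical-stability trick for softmax kernels).
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

The outputs always sum to 1 and preserve the ordering of the inputs.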
Preferred:
Prior startup, small team, or incubation experience.
Experience with ML frameworks such as TensorFlow and/or PyTorch.
Experience working with ML compilers and algorithms, such as MLIR, LLVM, TVM, Glow, etc.
Experience with a deep learning framework (such as PyTorch or TensorFlow) and ML models for CV, NLP, or Recommendation.
Work experience at a cloud provider or AI compute / sub-system company.
AI Systems Solutions Architect
·
d-Matrix Corporation
·
Santa Clara, CA
Description: The role: AI Systems Solutions Architect
d-Matrix is looking for an AI System Solutions Architect to develop world-class products around d-Matrix inference accelerators. In this role, you will engage with key customers, internal architects, and other key internal and external stakeholders to drive overall system solutions. This requires technically analyzing and defining outside-in usage cases, and using a broad spectrum of technologies to drive an AI server system solution spanning silicon, platform HW/SW, and usages to deliver the best customer experience with d-Matrix inference accelerators.
Design, develop, and deploy scalable GenAI inference solutions with d-Matrix accelerators
Work closely with team members across architecture, engineering, product management, and business development to optimize d-Matrix system solutions for the best balance of performance and power, feature set, and overall system cost.
Work closely with Datacenter, OEM, and ODM customers at the early stages of the product concept and planning phase, to enable system design with partners and the industrial ecosystem.
Influence and shape the future generations of products and solutions by contributing to the system architecture and technology through the early engagement cycle with customers and industrial partners.
Stay abreast of the latest advancements in GenAI hardware and software technologies and assess their suitability for integration into d-Matrix GenAI inference solutions.
Establish credibility with both engineering and leadership counterparts at top technology companies, communicate technical results and positions clearly and accurately, and drive alignment on solutions.
Requirements - What you will bring:
15+ years of industry experience and an engineering degree in Electrical Engineering, Computer Engineering, or Computer Science.
5+ years of AI server system experience across multiple projects, spanning architecture, development, and design (including memory, I/O, power delivery, power management, boot process, FW, and BMC/hardware management) through bring-up and validation, and supported through release to production.
5+ years of experience in a customer-facing role interfacing with OEMs, ODMs and CSPs.
A detailed understanding of industry-standard server buses such as DDR, PCIe, and CXL, and other high-speed I/O protocols, is required.
Ability to work seamlessly across engineering disciplines and geographies to deliver excellent results.
Deep understanding of datacenter AI infrastructure requirements and challenges.
Preferred:
Hands-on understanding of AI/ML infrastructure and hardware accelerators
Experience with leading AI/ML frameworks such as PyTorch, TensorFlow, ONNX, etc. and container orchestration platforms such as Kubernetes
Outstanding communication and presentation skills
AI Security Architect, Senior Staff
·
d-Matrix Corporation
·
Santa Clara, CA
Description: The role: AI Security Architect (Senior Staff/Principal)
d-Matrix is seeking an outstanding security architect to help integrate solid secure-computing principles into our high-performance AI accelerator systems so that they meet or exceed the needs of our datacenter customers. Taking a holistic view of security, we incorporate security features from silicon to the upper levels of the stack to enable customers’ workloads to execute in a reliable and safe environment, irrespective of deployment scale.
What you will do:
As a member of the architecture team, you will contribute to hardware and software security features that enhance the next generation of our inference accelerators
This role requires keeping up with the latest research in the ML, architecture, and security domains, and collaborating with partner teams including design, verification, and software
You will help assimilate customers’ security requirements to define the threat model and mitigation features for our computing systems, subsequently working with the engineering teams to incorporate them at the appropriate design levels
Requirements - What you will bring:
MSEE with 15+ years of experience or PhD with 10+ years of applicable experience
Solid grasp, through academic or industry experience, of several of the relevant areas – computer architecture, secure computing, distributed systems, datacenter reliability/manageability, ML fundamentals
Hands-on experience with authentication, isolation, encryption, device signing, servicing in datacenters, and HW-SW feature definition of the same
Programming fluency in C/C++ or Python, ability to learn new concepts and quickly prototype for experimentation
Research background with publication record in top-tier architecture, security, or machine learning venues is highly desired
Self-motivated team player with strong sense of collaboration and initiative
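For illustration only (not d-Matrix's actual security design), the device-signing and authentication concepts above can be sketched with the Python standard library's HMAC support:

```python
import hashlib
import hmac

def sign(message: bytes, key: bytes) -> bytes:
    # An HMAC-SHA256 tag binds the message to a shared secret key,
    # so a verifier holding the key can detect tampering.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, key: bytes, tag: bytes) -> bool:
    # compare_digest runs in constant time, avoiding timing side channels.
    return hmac.compare_digest(sign(message, key), tag)
```

Real accelerator attestation typically uses asymmetric keys rooted in hardware, but the verify-before-trust pattern is the same.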
AI Hardware Architect
·
d-Matrix Corporation
·
Santa Clara, CA or Seattle, WA
Description: The role: AI Hardware Architect
d-Matrix is seeking outstanding computer architects to help accelerate AI application performance at the intersection of hardware and software, with particular focus on emerging hardware technologies (such as DIMC, D2D, PIM, etc.) and emerging workloads (such as generative inference). Our acceleration philosophy cuts through the whole system, ranging from efficient tensor cores, storage, and data movement to co-design of dataflow and collective communication techniques.
What you will do:
As a member of the architecture team, you will contribute to features that power the next generation of inference accelerators in datacenters.
This role requires keeping up with the latest research in the ML architecture and algorithms space, and collaborating with partner teams including hardware design and compiler.
Your day-to-day work will include (1) analyzing the properties of emerging machine learning algorithms and identifying their functional and performance implications, (2) proposing new features to enable or accelerate these algorithms, and (3) studying the benefits of proposed features with performance models (analytical, cycle-level).
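As a toy illustration of the analytical performance models mentioned in (3), a roofline-style estimate bounds a kernel's runtime by the slower of its compute and memory demands (a deliberate simplification; real models add many more terms):

```python
def roofline_time(flops, bytes_moved, peak_flops, peak_bw):
    # A kernel is limited by whichever resource it saturates first:
    # compute throughput (flops/s) or memory bandwidth (bytes/s).
    return max(flops / peak_flops, bytes_moved / peak_bw)
```

A kernel whose arithmetic intensity (flops per byte) falls below the machine balance (peak_flops / peak_bw) is memory-bound, which is exactly the "memory wall" regime that in-memory compute targets.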
Requirements - What you will bring:
MS or MSEE with 3+ years of experience, or PhD with 0-1 years of applicable experience.
Solid grasp, through academic or industry experience, of several of the relevant areas – computer architecture, hardware-software co-design, performance modeling.
Programming fluency in C/C++ or Python.
Experience with developing architecture simulators for performance analysis, or hacking existing ones such as cycle-level simulators (gem5, GPGPU-Sim, etc.) or analytical models (Timeloop, Maestro, etc.).
Research background with a publication record in top-tier architecture or machine learning venues (such as ISCA, MICRO, ASPLOS, HPCA, DAC, MLSys, etc.) is a huge plus.
Self-motivated team player with strong sense of collaboration and initiative.
Machine Learning Engineer, Staff
·
d-Matrix Corporation
·
Santa Clara, CA
Description: d-Matrix is seeking a Machine Learning Engineer to join our Algorithm Team. We’re looking for someone to invent, design, and implement efficient algorithms that will be used to optimize Large Language Model inference on the DNN accelerators we develop. You will be part of a close-knit team of mathematicians, ML researchers, and ML engineers who create and apply advanced algorithmic and numerical techniques to the most cutting-edge and high-impact research at the overlap of mathematics, ML, and modern LLM applications.
What you will do:
implement advanced quantization algorithms for modern LLMs,
support and develop numerical libraries,
participate in design of the next generation chips for multi-model training and inference.
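To make the first bullet concrete, here is a deliberately minimal sketch of symmetric per-tensor int8 quantization (far simpler than the advanced algorithms the role involves, and assuming a nonzero tensor):

```python
def quantize_int8(weights):
    # Symmetric per-tensor quantization: a single scale maps max |w| onto 127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    # Reconstruct approximate float weights from the int8 codes.
    return [v * scale for v in q]
```

State-of-the-art LLM quantization goes well beyond this, e.g. per-channel scales and calibration against activation statistics, but the round-trip structure is the same.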
Requirements - What you will bring:
MSc or PhD in CS, statistics, physics, or a related STEM field,
4+ years of experience in industrial coding and OOP design,
high motivation to innovate and collaborate with experts from various fields,
experience in Python,
experience with transformer architecture is advantageous but not mandatory.
Machine Learning Software Engineering Intern
·
d-Matrix Corporation
·
Santa Clara
Description: The role: Machine Learning Software Engineering Intern
What you will do:
The Software Team at d-Matrix is looking for an ML Software Engineering Intern to join the team. The location can be Santa Clara or remote. You will be joining a team of exceptional professionals enthusiastic about tackling some of the biggest challenges of AI compute. In this role, you will work in one or more of the following domains: (1) developing performant implementations of SOTA ML models such as LLaMA, GPT, BERT, DLRM, etc.; (2) developing and maintaining tools for performance simulation, analysis, debugging, and profiling; (3) developing AI infrastructure software such as the kernel compiler, inference engine, model factory, etc.; and (4) developing QA systems and automation software. You will engage and collaborate with the rest of the SW team to meet development milestones, and contribute to the publication of papers and intellectual property as applicable.
Requirements - What you will bring:
Minimum:
Enrolled in a Bachelor's degree in Computer Science, Electrical and Computer Engineering, or a related scientific discipline.
A problem solver, able to break down and simplify complex problems to come up with elegant and efficient solutions.
Proficient in programming in Python, C, or C++.
Desired:
Enrolled in either a MS or PhD in Computer Science, Electrical and Computer Engineering, or a related scientific discipline.
Understanding of CPU / GPU architectures and their memory systems.
Experience with specialized HW accelerators for deep neural networks.
Experience developing high performance kernels, simulators, debuggers, etc. targeting GPUs/Other accelerators.
Experience using Machine Learning frameworks, like PyTorch (preferable), TensorFlow, etc.
Experience with Machine Learning compilers, like MLIR (preferable), TVM, etc.
Experience deploying inference pipelines. Experience using or developing inference engines such as vLLM, TensorRT-LLM.
Design Validation Engineering Intern
·
d-Matrix Corporation
·
Santa Clara, CA
Description: The role: Design Validation Engineering Intern
What You Will Do:
- Development of tools and methodologies for silicon validation:
- Automation of data collection
- Instrument control interface
- Database back end, including outlier detection and classification
- Data post-processing and analysis for margining and debug
- Linear and logistic regression, clustering, plot and report generation
- Work with high-speed serial interfaces including PCI Express Gen5, LPDDR5 memory, and die-to-die interconnects on a multi-chip module
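As a toy illustration of the outlier-detection step in such a validation pipeline, a z-score filter over the standard library's `statistics` module (real flows would use more robust methods and a proper database back end):

```python
import statistics

def zscore_outliers(samples, threshold=3.0):
    # Flag the indices of measurements lying more than `threshold`
    # sample standard deviations from the mean.
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]
```

Flagged indices can then be cross-referenced against the instrument logs to classify the outlier as a real margin failure or a collection artifact.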
Requirements - What you will bring:
- Familiarity with hardware systems, advanced electronic circuits, signals and systems
- Lab coursework and/or experience
- Solid knowledge and understanding of probability and statistical science
- Strong C and Python programming skills
- Excellent verbal and written communication skills
- In the graduating year of a graduate school program
Compiler Software Engineer Intern
·
d-Matrix Corporation
·
Santa Clara, CA or Toronto, Ontario, Canada
Description: Role: Compiler Software Engineer Intern - Santa Clara, CA or Toronto, Ontario, Canada
What you will do:
The Compiler Team at d-Matrix is responsible for developing the software that performs the logical-to-physical mapping of a graph expressed in an IR dialect (like Tensor Operator Set Architecture (TOSA), MHLO or Linalg) to the physical architecture of the distributed parallel memory accelerator used to execute it. It performs multiple passes over the IR to apply operations like tiling, compute resource allocation, memory buffer allocation, scheduling and code generation. You will be joining a team of exceptional people enthusiastic about developing state-of-the-art ML compiler technology. This internship position is for 3 months.
In this role you will design, implement, and evaluate a method for managing floating-point data types in the compiler. You will work under the guidance of two members of the compiler backend team; one is an experienced compiler developer based on the West Coast of the US.
You will engage and collaborate with the engineering team in the US to understand the mechanisms made available by the hardware design to perform efficient floating-point operations using reduced-precision floating-point data types.
Successful completion of the project will be demonstrated by a simple model, output by the compiler incorporating your code, that executes correctly on the hardware instruction set architecture (ISA) simulator. This model incorporates various number-format representations for reduced-precision floating point.
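As a toy illustration of an IR pass (far simpler than the tiling, allocation, and scheduling passes described above, and not the d-Matrix compiler's actual IR), constant folding over a nested-tuple expression tree:

```python
def fold_constants(expr):
    # Toy IR: a number, a string variable name, or ('add' | 'mul', lhs, rhs).
    # Recursively fold any subtree whose operands are both constants.
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return expr
    op, lhs, rhs = expr
    lhs, rhs = fold_constants(lhs), fold_constants(rhs)
    if isinstance(lhs, (int, float)) and isinstance(rhs, (int, float)):
        return lhs + rhs if op == 'add' else lhs * rhs
    return (op, lhs, rhs)
```

A production compiler runs many such passes in sequence over its IR, each consuming and producing the same representation.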
Requirements:
What you will bring:
• Bachelor’s degree in computer science, or an equivalent 3 years toward an engineering degree with emphasis on computing and mathematics coursework.
• Proficiency with C++ object-oriented programming is essential.
• Understanding of fixed-point and floating-point number representations, floating-point arithmetic, reduced-precision floating-point representations, sparse matrix storage representations, and the methods used to convert between them.
• Some experience in applied computer programming (e.g. prior internship).
• Understanding of basic compiler concepts and methods used in creating compilers (ideally via a compiler course).
• Data structures and algorithms for manipulating directed acyclic graphs.
Desired:
• Familiarity with sparse matrix storage representations.
• Hands-on experience with CNN, RNN, and Transformer neural network architectures.
• Experience with programming GPUs and specialized HW accelerator systems for deep neural networks.
• Passionate about learning new compiler development methodologies like MLIR.
• Enthusiasm for learning new concepts from compiler experts in the US and willingness to work across time zones to facilitate collaboration.
Company Description: d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The “holy grail” of AI compute has been to break through the memory wall to minimize data movement. We’ve achieved this with a first-of-its-kind DIMC engine. Having secured over $154M, including $110M in our Series B offering, d-Matrix is poised to advance Large Language Models and scale generative inference acceleration with our chiplet and in-memory compute approach. We are on track to deliver our first commercial product in 2024 and are poised to meet the energy and performance demands of these Large Language Models. The company has 100+ employees across Silicon Valley, Sydney and Bengaluru.
Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for all the cloud hyperscalers globally (Amazon, Facebook, Google, Microsoft, Alibaba, Tencent) along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Research Associate
·
Duke University
·
Durham, NC
Session: Job Postings
Description: The Center for Computational and Digital Health Innovation seeks an experienced Research Associate to support cutting-edge research projects across multiple disciplines. This individual will play a key role in advancing interdisciplinary computational science, providing support for high performance computing (HPC) environments, and facilitating large-scale data fabric deployment. The successful candidate will work with faculty across diverse research domains, contributing to the development and optimization of software systems that underpin groundbreaking advancements in health innovation.
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
1178190 Datacom Principal Software Engineer
·
HPE
·
Frisco, TX; Austin, TX; San Jose, CA
Session: Job Postings
Description: We are seeking an experienced and highly skilled Principal Software Engineer to lead the design, development, and optimization of Layer2/Layer3 networking software for our cutting-edge data communication products. This role requires deep expertise in IP Routing and Tunneling, along with a strong background in network protocols, high-performance software design, and a passion for innovative problem-solving.
Key Responsibilities:
• Lead Software Development: Architect, design, and implement high-performance Layer2/Layer3 networking software with a focus on scalability, reliability, and security.
• Protocol Implementation: Develop and optimize IP Routing protocols (e.g., OSPF, BGP, IS-IS) and Tunneling technologies (e.g., GRE, IPsec, MPLS) to meet product requirements.
• Team Collaboration: Mentor and guide junior engineers, ensuring best practices in coding, design, and architecture are followed.
• Performance Optimization: Analyze and improve the performance, scalability, and reliability of networking software across diverse hardware platforms.
• Cross-Functional Collaboration: Work closely with hardware, QA, and product management teams to define requirements, create technical specifications, and deliver high-quality software.
• Research and Innovation: Stay current with the latest networking technologies, trends, and industry standards. Drive innovation in software design and implementation.
• Problem Solving: Diagnose and troubleshoot complex software issues in both development and production environments.
Requirements:
• Education: Bachelor’s or master’s degree in Computer Science, Electrical Engineering, or a related field. Advanced degrees are preferred.
• Experience: 10+ years of experience in software development, with at least 5 years focused on networking and Datacom Layer2/Layer3.
Technical Skills:
• Extensive experience with IP Routing protocols (OSPF, BGP, IS-IS) and Tunneling technologies (GRE, IPsec, MPLS).
• Proficiency in C/C++ programming languages, with a strong understanding of system-level programming.
• Deep knowledge of network protocols, data structures, algorithms, and software architecture.
• Experience with hardware-software integration and performance optimization on embedded systems.
• Familiarity with network simulation tools, testing frameworks, and debugging techniques.
Soft Skills:
• Excellent leadership, communication, and interpersonal skills.
• Strong analytical and problem-solving abilities.
• Ability to work effectively in a fast-paced, collaborative environment.
Preferred Qualifications:
• Experience with SDN (Software Defined Networking) and NFV (Network Functions Virtualization).
• Knowledge of Linux kernel networking stack and experience with open-source networking projects.
• Contributions to industry standards and participation in relevant technical communities.
Company Description: HPE is the global edge-to-cloud company built to transform your business. How? By helping you connect, protect, analyze, and act on all your data and applications wherever they live, from edge to cloud, so you can turn insights into outcomes at the speed required to thrive in today’s complex world.
2024-10-17
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
HPE
In-person
Full Time
1160608 Senior Architect, IP Routing and Switching
·
HPE
·
Remote
Session: Job Postings
Description: High Performance Computing, AI and Labs is a critical element of HPE, responsible for developing the world’s most cutting-edge, high-performance supercomputers, defining the next era of computing, and delivering valuable insight and innovation. Slingshot Interconnect is the high-performance networking solution for the most powerful supercomputers in the world, like Frontier and Aurora. Join us and redefine what’s next for you.
About the Job
We are seeking an IP Routing and Switching architect to work with the Slingshot architects and product management to define and implement Layer 3 routing on Slingshot switching ASIC (Rosetta2, Rosetta3). The role is very challenging and rewarding, as the solution is not limited to managing a single switch but needs to be a Software Defined Networking solution that manages a high-performance network with potentially thousands of such switches.
• Uses domain knowledge in L3 networking to build a Slingshot routing stack from scratch.
• Leverages networking experience to suggest open-source solutions and other alternatives that can expedite the development and reduce the time to market.
• Works on implementing innovative solutions that can enable ease of management of network of Rosetta switches managed by an SDN software stack.
• Designs, develops, troubleshoots and debugs software programs for software enhancements and new products.
• Develops software including operating systems, compilers, routers, networks, utilities, databases and Internet-related tools.
• Possesses unique mastery and recognized authority on relevant subject matter knowledge including technologies, theories and techniques.
• Provides highly innovative solutions.
• Oversees large, cross-division functional teams or projects that affect the organization's long-term goals. May participate in cross-division, multi-function teams.
How You'll Make Your Mark
• Develops organization-wide architectures and methodologies for software systems design and development across multiple platforms and organizations within the Global Business Unit.
• Identifies and evaluates new technologies, innovations, and outsourced development partner relationships for alignment with technology roadmap and business value; creates plans for integration and update into architecture.
• Reviews and evaluates designs and project activities for compliance with development guidelines and standards; provides tangible feedback to improve product quality and mitigate failure risk.
• Leverages recognized domain expertise, business acumen, and experience to influence conclusions of the business executives, outsourced development partners, and industry standards groups.
• Provides guidance and mentoring to less-experienced staff members to set an example of software systems design and development innovation and excellence.
Requirements:
• Bachelor’s or master’s degree in computer science, information systems, or equivalent
• Typically, 15+ years of experience
• Experience working on the software routing stack on a network operating system like Cisco, Juniper, Arista, etc., or open-source network operating system like SONiC using routing stacks like FRRouting or Quagga
• Experience working on distributed and highly available systems
• Experience in end-to-end network stack development, from network configuration handling to routing protocols to routing state distribution to programming routing state into the ASIC (optional, though a preferred skill)
• Experience in Software Defined Networking (optional, though a preferred skill, as it will come in handy for building a robust and scalable solution to manage a large network of Rosetta switches)
• Experience implementing multi-tenancy solutions using technologies like VxLAN (optional, though a preferred skill)
• Extensive coding and code review experience in programming languages like C / C++ / Java / Go
• Experience in cloud native software development (optional, though will be a preferred skill)
• Experience designing and developing software systems design tools and languages
• Excellent assessment and problem-solving skills
• Experience in overall architecture of software systems for products and solutions
• Experience designing and integrating software systems running on multiple platform types into overall architecture
• Experience evaluating and selecting forms and processes for software systems testing and methodology, including writing and execution of test plans, debugging, and testing scripts and tools
• Excellent written and verbal communication skills; mastery in English and local language
• Ability to effectively communicate product architectures, design proposals and negotiate options at business unit and executive levels
Company Description: HPE is the global edge-to-cloud company built to transform your business. How? By helping you connect, protect, analyze, and act on all your data and applications wherever they live, from edge to cloud, so you can turn insights into outcomes at the speed required to thrive in today’s complex world.
2024-10-17
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
HPE
Remote
Full Time
1174287 Full-Stack Monitoring & AIOps Engineer
·
HPE
·
Chicago, IL
Session: Job Postings
Description: This HPE Slingshot R&D team member will be located on site at Argonne National Lab (Chicago metropolitan area). The employee will report to HPE’s R&D manager as a member of the Slingshot AIOps and Monitoring R&D group, will also report to an ANL supervisor, and is expected to work alongside other ANL staff in support of the Aurora system. ANL will provide a standard on-site office environment and network access. At this time the position is not expected to require a security clearance. The primary objective of this staffing arrangement is to improve the exchange of ideas, requirements, and diagnostics/monitoring tools, and to enable HPE to implement features and future diagnostics/monitoring products that better address the requirements of ANL’s Aurora supercomputer system. The employee may also facilitate the deployment, beta testing, and utilization of new HPE Fabric AIOps and Slingshot Monitoring software products.
Expected roles and responsibilities include:
• Employee will manage and communicate expectations to both the HPE manager and ANL supervisor regarding respective responsibilities and commitments.
• Track and facilitate resolution of the customer’s HPC interconnect issues and interface with HPE Slingshot R&D, as needed, to align the proper resources to resolve advanced issues beyond traditional break/fix.
• Assist in the diagnosis of fabric-related problems, write documentation, perform RCAs, drive upgrade planning, tool development, and other related tasks.
• Develop and program integrated software algorithms to structure, analyze and leverage structured and unstructured data in monitoring and analytics system applications.
• Can work with large scale computing frameworks, data analysis systems, and modeling environments.
• Use machine learning and statistical modeling techniques to improve product/system performance, data management, quality, and accuracy.
• Formulate descriptive, diagnostic, predictive and prescriptive insights/algorithms and translate technical specifications into code.
• Document procedures for installation and maintenance, complete programming, perform testing and debugging, define and monitor performance metrics.
• Contribute to the success of HPE by translating customer requirements and industry trends into products, solutions, and systems improvement projects.
• Contributions are expected to have measurable impact on Slingshot definition or development.
• Apply in-depth professional knowledge and innovative ideas to solve complex problems. Visible contributions improve time-to-market, achieve cost reductions, or satisfy current and future unmet customer needs.
• Recognized internal authority on key technology area applying innovative principles and ideas.
• Provide technical leadership for significant project/program work.
• Lead or participate in cross-functional initiatives and contribute to mentorship and knowledge sharing across the organization.
Requirements:
• Bachelor’s or master’s degree in Computer Science, Electrical Engineering, or equivalent.
• Typically, 6-10 years’ experience.
• High Performance Computing experience is nice to have.
Company Description: HPE is the global edge-to-cloud company built to transform your business. How? By helping you connect, protect, analyze, and act on all your data and applications wherever they live, from edge to cloud, so you can turn insights into outcomes at the speed required to thrive in today’s complex world.
2024-10-17
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
HPE
In-person
Full Time
1159017 Network Architect
·
HPE
·
Remote – USA
Session: Job Postings
Description: The HPE Slingshot architecture team is currently seeking a network architect to contribute to the development of future generations of its networking technology. The chosen candidate will primarily concentrate on future Slingshot NIC architectures with a focus on network security.
To be considered for this position, the candidate should be an expert in network architecture and possess knowledge in network security. They should also be capable of evaluating various network architecture concepts and how they impact system-level design trade-off analysis and optimization. Collaboration with other network architects, ASIC designers, and NIC software engineers will be an integral part of this role.
About the Job
• Designs, assesses, develops, modifies and evaluates electrical/electronic parts, components, sub-systems, algorithms, or integrated circuitry for electrical/electronic equipment and other hardware systems.
• Conducts feasibility studies, design margin and validation analyses and empirical testing on new and modified designs.
• Assists in architecture development and assessment. Evaluates reliability of materials, properties, designs, and techniques used in production.
• May direct support personnel in the preparation of detailed design, design testing and prototype fabrication.
• Applies advanced subject matter knowledge to solve complex business issues and is regarded as a subject matter expert.
• Frequently contributes to the development of new ideas and methods.
• Works on complex problems where assessment of situations or data requires an in-depth evaluation of multiple factors.
• Provides expertise to functional project teams and may participate in cross-functional initiatives.
• Acts as an expert providing direction and guidance to process improvements and establishing policies.
• May provide mentoring and guidance to lower-level employees.
• Oversees multiple project teams of other electrical hardware engineers and internal and outsourced development partners responsible for all stages of electrical hardware design and development for complex products, solutions, and platforms, including design, validation, tooling and testing.
• Manages and expands relationships with internal and outsourced development partners on electrical hardware design and development.
• Reviews and evaluates designs and project activities for compliance with technology and development guidelines and standards; provides tangible feedback to improve product quality.
• Provides domain-specific expertise and overall electrical/electronic hardware and platform leadership and perspective to cross-organization projects, programs, and activities.
• Drives innovation and integration of new technologies into projects and activities in the electrical hardware design organization.
• Provides guidance and mentoring to less-experienced staff members.
Requirements:
• Bachelor’s or master’s degree in Electrical Engineering.
• Typically 6-10 years of experience.
• Expertise in network architecture and knowledge of network security.
• Capable of evaluating various network architecture concepts and how they impact system-level design trade-off assessment and optimization.
• Ability to collaborate with other network architects, ASIC designers, and NIC software engineers.
• Excellent analytical and problem solving skills.
• Experience in overall network architecture for products and solutions.
• Designing and integrating electronic components, integrated circuitry, and algorithms into overall architecture.
• Evaluating and proposing forms of empirical analysis, modeling and testing methodologies to validate component, circuit, and hardware designs and thermal/emissions management.
• Excellent written and verbal communication skills; mastery in English and local language.
• Ability to effectively communicate product architectures, design proposals and negotiate options at senior management levels.
Company Description: HPE is the global edge-to-cloud company built to transform your business. How? By helping you connect, protect, analyze, and act on all your data and applications wherever they live, from edge to cloud, so you can turn insights into outcomes at the speed required to thrive in today’s complex world.
2024-10-17
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
HPE
Remote
Full Time
1179359 Senior Power Integrity Engineer
·
HPE
·
Chippewa Falls, WI; Houston, TX; Spring, TX
Session: Job Postings
Description: The Power Integrity engineer will use sophisticated simulation tools and laboratory measurement to design, simulate, validate, debug, and optimize high performance electrical systems. The engineer will work with a diverse team of engineers that support the design, validation, scaling, deployment, and support of industry-leading supercomputer systems. A curious mind and attention to detail are key to success in this role.
Responsibilities:
• Design, validate and characterize power conversion and delivery systems for large computer infrastructure
• Development of power distribution network (breakers, connectors, bus bars) for 400+ kW server cabinet infrastructure
• Create simulation models, plan and execute simulation suites
• Correlate models to measured laboratory results
• Work with vendors to specify, procure, and validate new power supply designs
• Analyze and verify PCB power delivery networks
• Measure and characterize AC response of low-voltage, quiet voltage rails on advanced processor node boards
• Track and drive the resolution of electrical issues, execute hands-on testing and debugging to root cause
• Create documentation, test plans, and test reports
Requirements:
• Bachelor’s or master’s degree in Electrical Engineering
• 6+ years’ experience in power conversion and distribution
• Comfortable working with high-voltage, 3-phase power systems
• Experience in power system architecture
• Experience working with cross-functional teams including mechanical engineering, cooling systems, and electrical design
• Experience using electrical design tools and software packages such as Spice, Ansoft, Allegro, PowerDC, PIPro, SIMPLI
• Knowledge of voltage converter architectures
• Experience with control loop analysis and design
• Familiarity with 400V 3-phase electrical systems is a plus
• Ability to apply analytical and problem-solving skills
• Experience in at least one common scripting language is a plus (e.g. perl, bash, Python)
• Excel spreadsheet skills
• Strong written and verbal communication skills
Company Description: HPE is the global edge-to-cloud company built to transform your business. How? By helping you connect, protect, analyze, and act on all your data and applications wherever they live, from edge to cloud, so you can turn insights into outcomes at the speed required to thrive in today’s complex world.
2024-10-17
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
In-person
Full Time
1179358 System/Validation Engineer
·
HPE
·
Spring, TX
Session: Job Postings
Description: We are seeking a System/Validation Engineer for HPC AI in Houston, Texas.
• Designs, analyzes, develops, modifies and evaluates electrical/electronic parts, components, sub-systems, algorithms, or integrated circuitry for electrical/electronic equipment and other hardware systems.
• Performs design margin and validation analyses and empirical testing on new and modified designs.
• Assists in architecture development and assessment.
• Evaluates reliability of materials, properties, designs, and techniques used in production.
• May direct support personnel in the preparation of detailed design, design testing and prototype fabrication.
• Contributions impact technical components of HPE products, solutions, or services regularly and sustainably.
• Applies advanced subject matter knowledge to solve complex business issues and is regarded as a subject matter expert.
• Provides expertise and partnership to functional and technical project teams and may participate in cross-functional initiatives.
• Exercises significant independent judgment to determine best method for achieving objectives.
• May provide team leadership and mentoring to others.
Responsibilities:
• Supports multiple electrical hardware engineers and internal and outsourced development partners responsible for all stages of electrical hardware design and development for complex products, solutions, and platforms, including design, validation, tooling and testing.
• Reviews and evaluates designs and project activities for compliance with technology and development guidelines and standards; provides tangible feedback to improve product quality.
• Provides domain-specific expertise and overall electrical/electronic hardware and platform expertise at an HPE function level, plus system-level support.
• Drives innovation and integration of new technologies into projects and activities in the electrical hardware design organization.
• Provides guidance and mentoring to less-experienced staff members.
Requirements:
• On-site position in Houston, Texas (can offer relocation)
• Must be a U.S. citizen
• Bachelor's or master's degree in Electrical Engineering preferred
• Typically, 6-10 years of experience in electrical engineering
• Using electrical design tools and software packages
• Excellent analytical and problem-solving skills
• Experience in overall architecture of electronic hardware for products and solutions
• Designing and integrating electronic components, integrated circuitry, and algorithms into overall architecture
• System management-level skills to support installations, hardware updates, system management, firmware and system-level software updates
• Evaluating and proposing forms of empirical analysis, modelling and testing methodologies to validate component, circuit, and hardware designs and thermal/emissions management
• Excellent written and verbal communication skills; mastery of English and the local language
Company Description: HPE is the global edge-to-cloud company built to transform your business. How? By helping you connect, protect, analyze, and act on all your data and applications wherever they live, from edge to cloud, so you can turn insights into outcomes at the speed required to thrive in today’s complex world.
2024-10-17
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
In-person
Full Time
1178141 HPC/AI MPI Ecosystem Software Engineer
·
HPE
·
Fort Collins, CO
Session: Job Postings
Description: Join the HPE AI Fabric team and be a part of the growth and evolution of Artificial Intelligence (AI), high-speed networking fabrics, and the fastest-growing and most significant technology revolution since the Internet.
Responsibilities include, but are not limited to:
• Engage and work with the Commercial HPC and AI ISV and open-source SW communities to validate, tune, and enable applications on the Slingshot Ethernet fabric.
• Enable the broad MPI ecosystem (OpenMPI, Intel MPI, Cray MPI, other distributions) by working with application and MPI vendors to target, tune, and ensure market-leading performance.
• Design, implement, and maintain system software that enables communication between GPUs, CPUs, and storage in scale-out AI and HPC systems.
• Work with all the leading architectures and vendors in the AI and Data Center markets — NVIDIA, AMD, Intel.
• Work with OEM, ODM, and VAR channel vendors to bring Slingshot to a broader set of customers. Validate and tune the applications driving those engagements.
• Develop and own HPE product usage support, upstreaming and community engagements, and internal testing and infrastructure.
• Work with cross-disciplinary teams to understand business requirements and align software direction to meet those needs.
Requirements:
• Bachelor’s or master's degree in computer science, engineering, or a related field
• 10+ years of relevant experience with a background in networking and communications software development and/or architecture in the data center, university, government lab, or AI-centric environments
• Background in MPI software development with an emphasis on HPC applications development, tuning, and deployment in a scale out compute cluster environment
• Ability to participate and own pieces of the product release pipeline up to and including package integration and support
• Deep understanding of networking architecture and communications including Ethernet and InfiniBand networking technologies
• Understanding of computer architecture, and familiarity with the fundamentals of GPU architecture
• Experience with NVIDIA and AMD GPU infrastructure and software stacks
• Programming and debug skills in C, C++ and Python
• Ability to understand how applications and industry middleware/libraries work in Slingshot-enabled systems and identify strategies and ideas for allowing these applications to work to customer expectations
• Experience with user-based networking and OFI libfabric software interfaces and APIs
Company Description: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world.
Our culture thrives on finding new and better ways to accelerate what’s next. We know diverse backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.
2024-10-25
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
In-person
Full Time
1181290 Senior Board Design Engineer
·
HPE
·
Remote - USA
Session: Job Postings
Description:
Are you passionate about High Performance Computing (HPC) and Artificial Intelligence (AI)? Do you want to be a part of the team that designs the next fastest supercomputer? This position is for an experienced Electrical Design Engineer to design complex PCAs for high performance computing systems. The Electrical Design Engineer may work independently or with a senior electrical design engineer to design, develop, bring-up, validate, and release leading edge, high performance computing PCAs and modules into production.
Primary Duties and Responsibilities:
Be able to independently design new and derivative PCAs based on system and board design requirements
Work closely with power, mechanical, thermal, signal integrity, firmware engineers as well as vendors to develop leading edge, innovative, cost effective system solutions for the high-performance computing industry
Conduct reviews of your designs with the design and operations teams, and contribute to team design reviews per HPE’s product design, development, and release processes
Bring-up, debug, and validate PCAs in an engineering test lab environment working independently or as part of a product development team
Release new PCA designs into manufacturing and support root cause issue resolution during volume production ramp up of those designs
Track and document issue discovery, investigation plans/results, issue resolution, and issue closure in issue-tracking databases
Create documentation, specifications, test plans, and manufacturing assembly/build instructions
Work independently and/or participate in task teams addressing complex system/problem analysis
Requirements:
Background and Experience:
Required: BSEE or MSEE with 8 years of design experience in server/compute node or high speed network switch PCA design
Proficiency with Allegro Design Entry HDL or similar schematic capture tools. Hierarchical schematic design experience is a plus
Experience with Allegro PCB layout tools for PCB design review, helping with constraint manager setup is desired
Advanced knowledge of signal integrity principles/techniques as well as voltage regulator design is desired
Knowledge of server grade x86 and/or ARM and/or GPU processors is a plus
Knowledge of processor complex/memory design and FPGAs is a plus
Knowledge of high-speed networks such as 100G/200G/400G Ethernet or InfiniBand is a plus
Excellent written and oral communication skills
Some travel may be required
2024-11-15
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
1181430 HPC Options Qualification Engineer
·
HPE
·
Remote - USA
Session: Job Postings
Description: Job Family Definition:
Designs, develops, troubleshoots and debugs software programs for software enhancements and new products. Develops software including operating systems, compilers, routers, networks, utilities, databases and Internet-related tools. Determines hardware compatibility and/or influences hardware design.
Management Level Definition:
Contributions include applying intermediate level of subject matter expertise to solve common technical problems. Acts as an informed team member providing analysis of information and recommendations for appropriate action. Works independently within an established framework and with moderate supervision.
Responsibilities:
Designs limited enhancements, updates, and programming changes for portions and subsystems of systems software, including operating systems, compilers, networking, utilities, databases, and Internet-related tools.
Analyzes design and determines coding, programming, and integration activities required based on specific objectives and established project guidelines.
Executes and writes portions of testing plans, protocols, and documentation for assigned portion of application; identifies and debugs issues with code and suggests changes or improvements.
Participates as a member of project team of other software systems engineers and internal and outsourced development partners to develop reliable, cost effective and high quality solutions for assigned systems portion or subsystem.
Collaborates and communicates with internal and outsourced development partners regarding software systems design status, project progress, and issue resolution.
Requirements:
Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent.
Typically 2-4 years of experience.
Knowledge and Skills:
Using software systems design tools and languages.
Ability to apply analytical and problem solving skills.
Designing software systems running on multiple platform types.
Software systems testing methodology, including execution of test plans, debugging, and testing scripts and tools.
Strong written and verbal communication skills; mastery of English and the local language. Ability to effectively communicate design proposals and negotiate options.
2024-11-15
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Storage Strategist
·
Hudson River Trading
·
New York, NY
Session: Job Postings
Description: The Research & Development team at Hudson River Trading (HRT) builds and maintains the computers, networks, data storage, operating systems, and software that allow our trading strategies and research environment to operate worldwide 24/7. We are looking for a Storage Strategist who enjoys being challenged, appreciates an open and collaborative organizational structure, and thrives in a fast-paced hands-on environment.
In this role, you will own the vision, planning and execution of HRT’s current and future storage needs. Given HRT’s growth and ever-evolving needs (especially when it comes to AI), this is a unique opportunity to set the direction for future storage solutions and work across engineering teams to deliver it.
You will drive engagement with all stakeholders — internal users and both external and internal technology providers — understand their product portfolio and roadmap, and build a strategy for HRT that is inclusive of financial modeling.
Responsibilities:
Own the overall charter of delivering performant, reliable, and future-proof storage solutions for all of HRT's storage needs
Collaborate and drive efforts across multiple (internal) cross-functional engineering teams
Improve user experience by understanding storage workloads and designing a variety of solutions and tools
Forecast future storage needs, research new storage solutions, and build five- and ten-year roadmaps
Draw insights from a deep understanding of hardware infrastructure and industry trends to inform the roadmap and planning decisions
Review deliverables and provide relevant guidance to engineering teams
Troubleshoot complex storage, OS, and networking issues
Integrate storage systems with HPC clusters, ensuring compatibility with existing hardware and software
Participate in an on-call rotation to ensure continuous support for HRT’s operations, responding promptly to critical storage issues and incidents
We are open to any HRT office location and offer WFH flexibility.
Requirements:
* 10+ years of experience in HPC-like environments with multi-PB storage deployments
* Experience in developing advanced storage software solutions and/or leading software development team(s)
* Deep expertise in HPC and scale-out enterprise storage solutions
* Knowledge of distributed file systems used for large-scale cluster computing (Lustre, GPFS, WEKA, S3, CEPH, etc.)
* Strong leadership, communication, and stakeholder management skills
* Deep technical understanding of commodity storage technologies
* Knowledgeable on storage industry trends
* Deep technical understanding of server architecture, design, and development, with emphasis on power and performance analysis
* Ability to analyze and solve problems under quick turnaround times
* Ability to manage time efficiently, balancing independent and collaborative workflows
* Proficient in Python and UNIX/Linux shell scripting
Company Description: Hudson River Trading (HRT) brings a scientific approach to trading financial products. We have built one of the world's most sophisticated computing environments for research and development. Our researchers are at the forefront of innovation in the world of algorithmic trading.
At HRT we welcome a variety of expertise: mathematics and computer science, physics and engineering, media and tech. We’re a community of self-starters who are motivated by the excitement of being at the cutting edge of automation in every part of our organization—from trading, to business operations, to recruiting and beyond. We value openness and transparency, and celebrate great ideas from HRT veterans and new hires alike. At HRT we’re friends and colleagues – whether we are sharing a meal, playing the latest board game, or writing elegant code. We embrace a culture of togetherness that extends far beyond the walls of our office.
Feel like you belong at HRT? Our goal is to find the best people and bring them together to do great work in a place where everyone is valued. HRT is proud of our diverse staff; we have offices all over the globe and benefit from our varied and unique perspectives. HRT is an equal opportunity employer; so whoever you are we’d love to get to know you.
2024-11-19
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
HPC Systems Architect
·
IMC Trading
·
Chicago, IL
Session: Job Postings
Description: At IMC, we thrive on pushing boundaries, embracing innovation, and leveraging cutting-edge technology to stay ahead in the competitive trading landscape. We are seeking an exceptional HPC Systems Architect who will lead the charge in designing, implementing, and optimizing our global HPC infrastructure, driving innovation, and delivering scalable solutions that empower our advanced computational needs.
Your Core Responsibilities:
• Architect and oversee the implementation of advanced HPC systems tailored to high-impact applications, including parallel computing and machine learning workloads
• Integrate cutting-edge technologies to enhance computational power and capabilities
• Evaluate and select best-in-class hardware and software solutions, optimizing our infrastructure for peak performance, scalability, efficiency, and reliability
• Partner with global teams to establish and enforce architectural standards and best practices across HPC environments
• Ensure seamless interoperability between different systems and teams, creating a cohesive, high-performance environment that aligns with IMC’s long-term business goals
Requirements:
• Bachelor’s, Master’s, or PhD degree in Computer Science, Electrical Engineering, or a related field
• 5+ years of experience in HPC architecture, system design, or a similar role within a large-scale compute environment
• Expertise in the following areas:
- HPC system design and optimization
- Parallel computing
- Linux systems administration and enterprise storage solutions (e.g., Vast, DDN, Isilon)
- HPC management tools (e.g., Kubernetes, Docker, Slurm)
- High-performance processors and compute offload devices (e.g., GPUs, FPGAs)
- Low-latency network architecture, including high-speed interconnects (e.g., InfiniBand, Ethernet)
- Datacenter design and optimization
- AI/ML frameworks and their integration into HPC systems
- Programming languages such as Python, Bash, C++, or similar
- Exceptional communication skills, with the ability to translate complex technical concepts for non-technical audiences
- Proven ability to influence decision-making and align global teams to achieve a unified vision
Company DescriptionIMC is a leading trading firm, known worldwide for our advanced, low-latency technology and world-class execution capabilities. Over the past 30 years, we’ve been a stabilizing force in the financial markets — providing the essential liquidity our counterparties depend on. Across offices in the U.S., Europe, and Asia Pacific, our talented employees are united by our entrepreneurial spirit, exceptional culture, and commitment to giving back. It's a strong foundation that allows us to grow and add new capabilities, year after year. From entering dynamic new markets, to developing a state-of-the-art research environment and diversifying our trading strategies, we dare to imagine what could be and work together to make it happen.
2024-11-08
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Open Rank Professors in Intelligent System Engineering
·
Indiana University
·
Bloomington, IN
Session: Job Postings
Description: Indiana University
Luddy School of Informatics, Computing, and Engineering
Open Rank Professors in the Intelligent Systems Engineering Department
The Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington invites applications for multiple tenure-track/tenured open-rank professor positions (assistant, associate, or full professor) in the Department of Intelligent Systems Engineering (ISE) to begin on August 1, 2025. ISE is an innovative program that focuses on the intersection of intelligent computing methods and systems engineering.
We are particularly interested in hiring in the academic domain of computer systems engineering, including software control systems, domain-specific architectures, energy-efficient computing, zero trust, high-performance computing, data engineering at scale, real-time predictive analytics and control, AI systems, physical artificial intelligence, cyber-physical systems, and mechatronics.
We seek candidates who can demonstrate an outstanding scholarly record of research as appropriate to rank and exhibited by high-impact peer-reviewed publications, a forward-looking externally funded research agenda, and a commitment to the education of both graduate and undergraduate students.
As IU’s flagship research institution, IU Bloomington is committed to being a welcoming and inclusive campus community. We seek candidates who will pursue the highest standards of academic excellence and whose research, teaching, and community engagement efforts contribute to welcoming, respectful, and inclusive learning and working environments for our students, staff, and faculty.
Indiana University is an equal employment and affirmative action employer and a provider of ADA services. All qualified applicants will receive consideration for employment based on individual qualifications. Indiana University prohibits discrimination based on age, ethnicity, color, race, religion, sex, sexual orientation, gender identity or expression, genetic information, marital status, national origin, disability status or protected veteran status.
Before a conditional offer of employment with tenure is finalized, candidates will be asked to disclose any pending investigations or previous findings of sexual or professional misconduct. They will also be required to authorize an inquiry by Indiana University Bloomington with all current and former employers along these lines. The relevance of information disclosed or ascertained in the context of this process to a candidate’s eligibility for hire will be evaluated by Indiana University Bloomington on a case-by-case basis. Applicants should be aware, however, that Indiana University Bloomington takes the matters of sexual and professional misconduct very seriously.
For detailed information on employee benefits please visit:
https://hr.iu.edu/
Links to additional information which may be of interest:
https://luddy.indiana.edu/index.html
https://www.indiana.edu/
https://vpfaa.indiana.edu/index.html
https://www.indiana.edu/hoosier-life/index.html
https://www.visitbloomington.com/
Requirements: Applicants should have demonstrable potential for (junior level) or an established record of (senior level) excellence in research and teaching, and a PhD (or ScD) in Engineering, Computer Science, or a related scientific discipline expected to be awarded prior to August 2025.
Company Description: Indiana University Bloomington is a public research university in Bloomington, Indiana, United States. It is the flagship campus of Indiana University and its largest campus, with over 40,000 students.
2024-11-01
HPC Systems Administrator · Indiana University · Bloomington, IN
Session: Job Postings
Description: Department Information
The High Performance Systems group is seeking an individual with skills and interests in Linux systems and cluster administration in support of Indiana University's computational research systems. Primary responsibilities include maintaining existing research clusters, supporting custom lab-specific co-located systems, and assisting with the administration of research infrastructure. Additionally, this position will play a role in the design and development of new systems to enhance the services we provide to the IU research community. Our work is essential in furthering the university's research mission, which spans scores of academic disciplines; join us and help Indiana University faculty and staff continue to break new ground.
Job Summary
• Configures, tests, troubleshoots, upgrades/modifies, and maintains file, print, application, web, database servers and related technologies, including hardware/software configuration and installation, operating system installation and support, security and configuration, backup strategies, business continuity strategies, and institutes best practices on modernizing systems in relation to changing technologies.
• Establishes metrics and monitors system configurations to ensure data integrity and optimal system performance.
• Implements system architectural plans, design modifications, and ensures compliance with federal and university policies and standards.
• Provides experienced analysis and evaluates new capabilities and emerging technologies; implements new systems and improves existing ones while ensuring established protocols and procedures are followed.
• Applies comprehensive knowledge to bug reporting and isolation, test case authoring and refinement, automation scripts, and works closely with other teams (engineering, cross functional and cross campus) to resolve problems.
• Documents systems administration practices and processes (testing, upgrades/modifications, issue/problem resolution).
Requirements
EDUCATION
Required
• Bachelor's degree
Preferred
• Degree in computer science or related field
WORK EXPERIENCE
Required
• 2 years of systems administration or related experience
• Experience with one or more scripting languages
SKILLS
Required
• Proficient communication skills
• Maintains a high degree of professionalism
• Demonstrated time management and priority setting skills
• Demonstrates a high commitment to quality
• Possesses flexibility to work in a fast-paced, dynamic environment
• Seeks to acquire knowledge in area of specialty
• Highly thorough and dependable
• Demonstrates a high level of accuracy, even under pressure
• Thorough knowledge of virtualized computer systems, storage systems, backup systems, network systems, network protocol and software interfaces
• Ability to quickly troubleshoot and resolve moderately complex problems
Equal Employment Opportunity
Indiana University is an equal employment and affirmative action employer and a provider of ADA services. All qualified applicants will receive consideration for employment based on individual qualifications. Indiana University prohibits discrimination based on age, ethnicity, color, race, religion, sex, sexual orientation, gender identity or expression, genetic information, marital status, national origin, disability status or protected veteran status. Indiana University does not discriminate on the basis of sex in its educational programs and activities, including employment and admission, as required by Title IX. Questions or complaints regarding Title IX may be referred to the U.S. Department of Education Office for Civil Rights or the university Title IX Coordinator. See Indiana University’s Notice of Non-Discrimination, which includes contact information: https://policies.iu.edu/policies/ua-01-equal-opportunity-affirmative-action/index.html
2024-10-16
Director, Design Verification · Lightmatter · Mountain View, CA and Boston, MA
Session: Job Postings
Description: As the Design Verification Director, you will be responsible for all Design Verification-related activities for Lightmatter optical interconnect products.
You will report directly to the VP of Photonics & Silicon Engineering and work closely with the Architecture, Digital, Analog, and Photonics teams to accelerate the development of our Passage-based Optical Interconnect products. The overall DV tasks will include a mix of Digital, Analog Mixed-Signal, and photonic verification test bench development using state-of-the-art tools.
We are looking to welcome a leader who will shape the future of computing; join us!
Responsibilities
• Own the design verification details including the DV plan and execution for chip projects
• Drive state-of-the-art DV methodology and flow for optical interconnect products
• Work with architecture, hardware, software, systems, and program management teams to deliver state-of-the-art silicon products
• Organize and expertly drive planning, scheduling, and day-to-day execution to support the design team
• Collaborate with leads and engineering teams in effectively estimating and prioritizing tasks to maintain high design quality on a realistic delivery schedule
• Develop and lead project plans (scope, schedule, and budget) to ensure alignment with key partners and business needs
• Facilitate recurring design verification team meetings and operational checkpoint activities throughout the life cycle of projects
• Set clear and targeted communication to management on project information, including project plan, key dates, and project status
• Provide the required hands-on project management, technical guidance to DV team, cross-functional coordination, and internal and external team communications to deliver outstanding program outcomes
• Take responsibility for release schedules and milestones including keeping up a high velocity of execution progress in a fast-paced startup environment
• Accountable for individual and team results which impact multiple functions
• Provide leadership to develop a high-quality Design Verification team including recruiting to match the rapidly growing scope of Lightmatter’s engagements in this technology area
Requirements
Qualifications
• Master's degree in EE, CS, or CE or equivalent
• 15+ years of experience in semiconductor technologies and participation in multiple tapeouts with first pass silicon success
• Must have expertise in verification methodologies, including simulation, formal verification, and FPGA emulation
• Proficient in HDL languages (Verilog, VHDL)
• Proficient with scripting languages, preferably Python
• Experience working with the development of Real Number Models (RNM) for photonics and analog circuits
• 8+ years of experience with AMS verification with UVM to ensure precise model representation leading to the development of the Golden Reference Model (GRM) for design verification
• 8+ years of experience developing people and building effective teams
• Proven track record of success in verification strategy development and execution
• Results-oriented with solid teamwork skills with the ability to collaborate with multiple functional teams across a variety of fields
Preferred Qualifications
• Self-starter who thrives in a fast-paced, dynamic environment with multiple competing priorities and who finds satisfaction in being accountable for accomplishing results quickly and accurately
• Enthusiastic, responsive, and passionate about finding opportunities for process improvement
Company Description: Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
2024-11-04
VP Sales, Datacenter · Lightmatter · Mountain View, CA or Boston, MA
Session: Job Postings
Description: As the VP of Datacenter Sales, reporting directly to the Senior VP of Sales and Solution Architecture, you will be a key leader at the company during a period of rapid growth. You will be responsible for building and managing relationships with top executives among our target customers, connecting key stakeholders to the account to ensure Lightmatter’s solutions are fully understood by the client, and working with account teams to close and manage design wins.
Responsibilities
• Full ownership of target customers; you are the “CEO” of the accounts and the authoritative link between them and Lightmatter.
• Ability to drive strong, technical sales conversations that uncover core motivators for customer decision making, as well as overlay Lightmatter solutions to address those needs.
• Strong understanding of AI datacenter infrastructure, key performance metrics based on current AI workload/application trends, Hyperscaler TCO analysis methodology, and infrastructure procurement + deployment lifecycle.
• Leverage strong communication skills and close relationships with your Lightmatter support team to drive agreements to closure.
• Provide input as the "voice of the customer" into Lightmatter’s go-to-market strategy.
• Drive the long-term success of the US Enterprise and Datacenter team by being a collaborative leader among your peers.
• Constantly strive to improve and expand your capabilities and skills, and support your team in doing the same.
• Understand the selling/buying cycle within the target customer space and ensure Lightmatter is in sync with customer demand.
• Provide insights into customer demand, deployment timing and work with the finance team on predicting revenue ramp.
• Entrepreneurial attitude with high level of business acumen, with ability to regularly interact with C-level through engineering at the world’s largest semi and systems companies.
• Strong presentation skills; must be able to concisely and accurately convey key aspects of Lightmatter’s value proposition in an impactful manner.
Requirements
• Bachelor's or master’s degree in Computer Engineering or Electrical Engineering
• Must have 15+ years of Hyperscale/Datacenter system and/or semiconductor sales experience
• 5+ years of sales experience in AI/HPC/Network/Interconnect
• Demonstrable track record of revenue growth and ability to close greenfield opportunities
• Experience managing $100M+ contract negotiations and agreements
• Positive track record calling on Top 10 U.S. datacenter accounts
• History of working with internal Product and Engineering teams to drive engagements to closure
• Strong team-centric attitude and a team growth mindset
• Demonstrable record of being process oriented to navigate and close deals with large, complex corporations
• Must be able to travel ~20% of the time
Preferred Qualifications
• Previous experience as a team lead
• Excellent written and oral communications skills with the ability to effectively interface with management and engineering
2024-11-04
Electro-Optic Systems Architect · Lightmatter · Boston, MA and Mountain View, CA
Session: Job Postings
Description: About this Role
We are hiring an Electro-Optic Systems Architect. The selected candidate will partner with external-facing teams at Lightmatter and internal engineering teams to deliver groundbreaking products to the market. In this role, you will contribute to the design and development of innovative photonics architecture solutions to deliver high-volume products for our customers. You will engage with our cross-disciplinary engineering teams to model and analyze solutions. You may also test and measure prototypes to validate the models. This role requires a solid understanding of leading-edge CMOS, advanced silicon photonics, high-speed (40+ GHz) electro-optic interfaces, and 3D integration. The role also requires creativity, analytical skills, and clear communication skills. You may represent the company at technical conferences as an example of our technical leadership.
Join a tight-knit team where each individual’s contributions directly influence the success of the company and product. You'll have the opportunity to build a new kind of computer from the ground up and to solve groundbreaking challenges along the way. Work with people who love to build and who thrive in technically diverse environments where great ideas are prioritized.
Responsibilities
-Creative problem-solving and contributing to high-performance electro-optic architecture solutions for high-volume products.
-Electro-optic modeling using Verilog-A, Cadence Spectre, Cadence Virtuoso, and Keysight ADS.
-Validate the models by designing experiments, conducting proof-of-concept experiments, and analyzing existing data.
-Actively collaborate with electronics and photonics engineers to specify, design, and validate circuits that meet performance, power, and production requirements.
-Document and present your contributions.
This is not a complete listing of the responsibilities. It’s a representation of the things you will be doing.
Requirements
Qualifications
-Ph.D. degree in Electrical Engineering or similar discipline with at least 3 years of relevant experience, or Master’s degree with at least 6 years of relevant experience.
-Minimum 2 years of experience in silicon photonics electronic-photonic co-design and co-simulation.
-Minimum 2 years of experience in silicon photonic high-speed optical modulators and receivers.
-Proficient in Cadence Virtuoso, Cadence Spectre, Verilog-A, Keysight ADS.
-Highly proficient in coding simulations in Python or Matlab.
-Experience in RF/mm-wave analog circuit design.
-Experience modeling optical communication links.
-Experience with corner analysis and Monte Carlo analysis.
-Experience collaborating with photonics and electronics teams.
Preferred Qualifications
-Experience in product development or projects with industry.
-Silicon photonic integrated circuit design and test experience.
-High-speed electro-optic measurement experience.
-Knowledge of signal processing and signal integrity in optical communications.
-Knowledge of SERDES and mixed signal interfaces.
-Strong publication record.
-Excellent written and verbal communication skills.
-Willingness and ability to learn quickly. Self-starter with a “no task is too big or small” attitude.
2024-11-11
Staff Signal Integrity Engineer · Lightmatter · Boston, MA and Mountain View, CA
Session: Job Postings
Description: About this Role
Lightmatter is looking for a Signal Integrity engineer to drive next-generation products from concept to development. You will be part of a cross-functional systems and packaging team consisting of silicon, package, and systems engineers, tasked with driving the package and system architecture definition from concept through design optimization and completion to meet the product’s signal and power integrity requirements. This position offers a unique opportunity to innovate new package and system architectures by solving complex interactions across digital, analog, and photonics domains while meeting stringent signal integrity goals.
Responsibilities
-Develop and implement signal integrity strategies for high-speed interfaces such as PCIe and SerDes.
-Perform signal integrity analysis and simulations to evaluate and optimize design choices, including eye diagram analysis, jitter analysis, and equalization techniques.
-Collaborate with cross-functional teams, including hardware designers, PCB layout engineers, and system architects, to provide signal integrity guidance and support throughout the product development lifecycle.
-Work closely with hardware designers to ensure proper impedance matching, termination, and routing techniques for high-speed SerDes interfaces, considering design constraints and manufacturability.
-Perform detailed signal integrity measurements and characterization using lab equipment, such as oscilloscopes, network analyzers, TDRs, and VNAs, to validate design performance and troubleshoot issues.
-Participate in signal integrity design reviews, providing technical expertise and recommendations to ensure optimal signal integrity performance and adherence to design specifications.
-Analyze and mitigate electromagnetic interference (EMI) and electromagnetic compatibility (EMC) challenges associated with SerDes interfaces, ensuring compliance with industry standards and regulations.
-Document and communicate signal integrity analysis results, design guidelines, and recommendations to stakeholders.
Requirements
Qualifications
-Bachelor’s degree in Electrical Engineering or relevant field.
-Minimum 8 years of relevant industry experience in signal integrity analysis for high-speed interfaces (112G SerDes, PCIe Gen5).
-Proficiency in using simulation and analysis tools for signal integrity, including Ansys HFSS & Keysight ADS.
-Strong understanding of transmission line theory, impedance matching, and equalization techniques.
-Knowledge of signaling protocols for high-speed interfaces, including PCIe and SerDes.
-Experience with PCB design and layout considerations for high-speed interfaces, including layer stack-up, routing, and via design.
-Familiarity with electromagnetic compatibility (EMC) and electromagnetic interference (EMI) mitigation techniques.
-Proficiency in using lab equipment for signal measurement and characterization, such as oscilloscopes, network analyzers, TDRs, and VNAs.
-Knowledge of industry-standard simulation models and methodologies for SerDes analysis, such as IBIS-AMI.
-Ability to write scripts to automate the simulation flow (Python, JavaScript, Ocean scripts, or SKILL).
Preferred Qualifications
-Strong problem-solving and debugging skills for signal integrity-related issues, including eye diagram analysis, jitter analysis, and equalization techniques.
-Experience with DDR and HBM integration.
-Understanding of CMOS process technologies and their impact on signal integrity.
-Demonstrated excellent communication skills and ability to collaborate effectively in cross-functional teams.
-Ability to work independently and tackle projects with minimal supervision.
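To make the scripting and jitter-analysis requirements above concrete: a post-processing step in an automated simulation flow might extract peak-to-peak crossing jitter from a sampled waveform. This is a hypothetical sketch, not Lightmatter's flow; the function names and the simple linear-interpolation approach are invented for illustration.

```python
def zero_crossing_times(t, v):
    """Linearly interpolated times where the sampled waveform crosses 0 V."""
    crossings = []
    for i in range(len(v) - 1):
        if v[i] == 0.0:
            crossings.append(t[i])
        elif v[i] * v[i + 1] < 0.0:           # sign change between samples
            frac = -v[i] / (v[i + 1] - v[i])  # linear interpolation within the step
            crossings.append(t[i] + frac * (t[i + 1] - t[i]))
    return crossings

def pk_pk_jitter(t, v, unit_interval):
    """Peak-to-peak deviation of crossing times from an ideal UI grid."""
    tc = zero_crossing_times(t, v)
    phases = []
    for x in tc:
        p = (x - tc[0]) % unit_interval       # crossing position within its UI
        if p > unit_interval / 2:             # fold to the nearest grid point
            p -= unit_interval
        phases.append(p)
    return max(phases) - min(phases)
```

In a real flow a script like this would sit behind the simulator (e.g., batch ADS or HFSS runs), parsing exported waveforms and gating pass/fail against a jitter budget.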
Company Description
Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
Posted: 2024-11-11
Sr./Principal Analog Architect · Lightmatter · Boston, MA and Mountain View, CA
Description
About this Role
We are hiring a Sr./Principal Analog Architect. The selected candidate will partner with external-facing teams at Lightmatter and internal engineering teams to deliver groundbreaking products to the market. In this role, you will contribute to designing and developing innovative analog architecture solutions to deliver high-volume products for our customers. As an architect, you will provide the engineering team with detailed technical documents that describe what needs to be built. You will engage with our cross-disciplinary engineering teams to model and analyze solutions. This role requires a deep understanding of high-frequency systems-on-chip (SoCs), high-speed (40+ GHz) electro-optic interfaces, silicon photonics, and 3D integration, along with creativity, analytical skill, and clear communication. You may represent the company at technical conferences as an example of our technical leadership.
You will report directly to the Chief Scientist and work closely with our digital, analog, photonic, and software teams. Join a tight-knit team where each individual’s contributions directly influence the success of the company and product. You'll have the opportunity to build a new kind of computer from the ground up and to solve groundbreaking challenges along the way. Work with people who love to build and who thrive in technically diverse environments where great ideas are prioritized.
Responsibilities
-Creative problem-solving and owning the architectures, characterization plans, and packaging approaches for a successful high-volume product.
-Be the AMS architecture owner of Lightmatter’s Passage product. Work closely with the rest of the architecture team and the chip-design team.
-Lead direct collaboration with top tier semiconductor customer(s) throughout the product development process at the system architecture level.
-Architect high-speed (50G and beyond) circuit blocks for optical transceivers, such as drivers, TIAs, equalizers, ADCs/DACs, PLLs, and CDRs that interface with SerDes.
-Author, review, and validate architectural specifications. With the product and applications engineering teams, also produce customer-facing product datasheets and reference designs.
-Lead the development and drive methodologies and simulation workflows for electro-optic SoC co-design.
-Collaborate with the product team to develop our future technological and product roadmap in the context of industry trends.
-Work closely with test and validation engineers to validate hardware against simulation prediction to ensure high performance and high yield.
-Actively collaborate across disciplines—with electronics, photonics, and mechanical engineering teams—to specify the requirements and solutions for circuit blocks, SoCs, debug, and validation.
-Publish and present novel ideas and participate in premier technical conferences.
Requirements
Qualifications
-A Ph.D. in Electrical Engineering or a similar discipline with 5+ years of relevant experience, or a Master’s degree with 8+ years of relevant experience.
-Minimum 8 years of experience in broadband, RF, and/or mm-wave design.
-Strong understanding of signal processing and signal integrity in optical communication.
-Highly proficient in designing high-speed, power-efficient optical transceivers in advanced CMOS.
-Power user of simulation tools such as Cadence Virtuoso, Cadence Spectre, Verilog-A, and IBIS-AMI simulators.
-Familiarity with techniques to minimize the design impact of PVT (process, voltage, and temperature) variations and optimize the design for yield.
-Excellent understanding of designs and layouts that optimize for speed and power while minimizing noise and crosstalk.
-Proven track record of delivering successful high-volume silicon to market, and a proven understanding of the silicon product development flow.
-Excellent customer and vendor communication skills.
Preferred Qualifications
-Ability and desire to collaborate in a cross-disciplinary team.
-3 years of experience in electronic-photonic co-design and co-simulation for transceivers.
-Understanding of SerDes and mixed-signal interfaces
-Strong publication and/or patent record
Photonics Design Engineer · Lightmatter · Boston, MA and Mountain View, CA
Description
About this Role
In this role, you will design, develop, build, and test photonic systems that are key to our products. You will also interact with device, hardware, and software design teams to assist in the overall development of photonic systems, specifying, selecting, and qualifying the various components necessary.
Join a tight-knit team where each individual’s contributions directly influence the success of the company and our innovative products! You'll have the opportunity to build a new kind of computer from the ground up and to solve groundbreaking challenges along the way. Work with people who love to build and who thrive in technically diverse environments where great ideas are prioritized.
Responsibilities
-Design high-performance photonics devices suitable for high-volume manufacturing
-Actively collaborate across disciplines—with analog, digital, and physical design teams—to lay out photonic devices for key test and product tape-outs
-Work with the photonics test team in defining requirements for device characterization, debug and validation
-Analyze characterization data to ensure the device/system meets the specifications
-Engage in problem solving and contribute novel architectural ideas
-Publish novel ideas related to photonics and participate in premier optics/photonics conferences
Requirements
Qualifications
-PhD in EECS, Physics, or a related discipline with 3 years of experience
-Must have knowledge of photonic device physics
-Demonstrated track record of designing integrated silicon photonics components using Lumerical, Tidy3D, or equivalent
-Experience with code-based GDS creation using gdsfactory or similar
-Experience with large scale layout, testing and characterization of photonic devices
-Experience with PDK and Cadence environments a plus
-Ability to collaborate in a cross-disciplinary team
-Excellent written and verbal communication skills
-Demonstrated strong problem-solving skills on problems that do not have obvious solutions
-Can-do, positive attitude
-Strong publication record in the photonics field a plus
Staff Laser Test Engineer · Lightmatter · Mountain View, CA
Description
About this Role
Lightmatter builds chips that enable extreme-scale artificial intelligence computing clusters. If you're a collaborative engineer or scientist who has a passion for innovation, solving challenging technical problems and doing impactful work like building the world's first optical computers, consider joining the team at Lightmatter!
In this role, as part of the product engineering team, you will focus on High Volume Manufacturing (HVM) testing, developing novel wafer- and package-level testing strategies for Lightmatter’s laser products. You will also help set up validation capabilities and carry out validation tests, and you will be expected to support customer deployments and related technical issues.
Responsibilities
-Work closely with architecture, design, product engineering, and external vendors to understand expected component capabilities and testing needs.
-Identify challenges and close the design feedback loop with early prototyping and feasibility studies to optimize performance.
-Develop and implement test protocols to evaluate and compare different engineering approaches.
-Maintain testing equipment and develop automations to improve efficiency.
-Analyze test data, document results, and provide feedback to the development team.
-Document and communicate (orally and in written form) methodologies, data, and analyses.
-Bring up laser test equipment and laser modules.
-Measure the impact of the laser on data-link performance metrics such as BER and OSNR.
-Must be able to travel to Palo Alto.
Requirements
Qualifications
-Master’s in Photonics, Physics, Electrical Engineering, or a related field
-3-6 years of experience as a laser test engineer
-Experience in characterization of lasers and SOAs, including measurements of the optical spectrum, noise characterization, thermal dependence and reliability
-Experience in die-level, board-level, and wafer-scale testing.
-Experience working with test equipment including benchtop lasers, VNAs, switches, optical and electrical spectrum analyzers (OSA/ESA), BERTs, high-speed oscilloscopes (DCA), source-meter units, and reference transmitters and receivers
-Proficiency in programming languages such as Python or LabVIEW for test automation.
-Strong analytical and problem-solving skills to identify and solve issues during the testing process.
-Ability to convey complex technical concepts to both technical and non-technical stakeholders with clear communication.
Preferred Qualifications
-PhD in Photonics, Physics, Electrical Engineering, or a related field
-2+ years of experience as a laser test engineer
-Laser Test HW/fixture development experience is highly desirable.
-Experience testing packaged laser modules and/or transceivers for datacom applications.
-Experience working in an HVM environment from NPI to mass production.
-Experience bringing up lab equipment and experimental setups for testing lasers/SOAs.
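The test-data-analysis responsibilities above typically reduce to screening each device against spec limits and summarizing a lot statistically. The sketch below is purely illustrative (the parameter names, limits, and function names are invented, not Lightmatter's actual test plan), using only the Python standard library:

```python
from statistics import mean, stdev

# Hypothetical spec limits per measured parameter: (min, max); None = no limit on that side
SPEC_LIMITS = {
    "power_mW": (9.0, 11.0),
    "smsr_dB": (35.0, None),      # side-mode suppression ratio: lower limit only
    "rin_dB_Hz": (None, -150.0),  # relative intensity noise: upper limit only
}

def screen_device(measurements):
    """Return the list of parameters that violate their spec limits."""
    failures = []
    for name, (lo, hi) in SPEC_LIMITS.items():
        value = measurements[name]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            failures.append(name)
    return failures

def summarize(lot, name):
    """Mean/sigma summary of one parameter across a lot of devices."""
    values = [device[name] for device in lot]
    return {"mean": mean(values), "sigma": stdev(values), "n": len(values)}
```

In practice this kind of screening script would be fed by the automated test executive and would log failures back to the development team, as the responsibilities describe.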
Sr./Principal Mixed-Signal Digital Architect · Lightmatter · Boston, MA and Mountain View, CA
Description
About this Job
We are hiring a Sr./Principal Mixed-Signal Digital Architect. The selected candidate will partner with external-facing teams at Lightmatter and the internal engineering team to deliver groundbreaking products to the market. As an architect, you will provide the engineering team with detailed technical documents that describe what needs to be built. This will require a solid understanding of process technology and the state of the art in chip design, as well as clear communication skills. You will represent the company externally at technical conferences as an example of our technical leadership.
We are seeking a motivated and dedicated hands-on Digital Architect to help develop chips for our next-generation communication architecture alongside a team of world-class scientists and engineers.
In this role, you will define the architecture of the digital circuits that enable a groundbreaking photonic-based communication fabric. Your work will not only enable the AMS/photonic devices at low-level but also the necessary interfaces with the firmware/software. You will interact directly with our vendors and customers. You will also define the product and technological roadmaps along with suitable design and verification methodologies.
You will report directly to the Chief Scientist and work closely with our digital, analog, photonic, and software teams. We are particularly looking for someone with expertise in architecting digital circuits to enable high-speed AMS circuits, with a strong preference for someone with experience in photonic circuits.
Join a tight-knit team where each individual’s contributions directly influence the company's and product's success. You'll have the opportunity to build a new kind of computer from the ground up and to solve groundbreaking challenges along the way. Work with people who love to build and who thrive in technically diverse environments where great ideas are prioritized.
Responsibilities
-Be the digital architecture owner of Lightmatter’s Passage product. Work closely with the rest of the architecture team, chip-design team, and the software team.
-Responsible for the digital architecture from initial concept through high-volume production.
-Work closely with the technical program manager(s) to plan, organize, and execute.
-Lead direct collaboration with top tier semiconductor customer(s) throughout the product development process at the system architecture level.
-Architect and specify system-level digital designs integrating complex AMS and photonic circuits, including high-speed drivers and receivers, SerDes, ADCs/DACs, and photonic modulators and detectors.
-Collaborate with the product team to develop our future technological and product roadmap.
-Work closely with test and validation engineers to validate hardware against simulation prediction.
-Actively collaborate with the RTL and DV teams to enable block-level implementation and verification, respectively.
-Author, review, and validate architectural specifications. With the product and applications engineering teams, also produce customer-facing product datasheet and reference designs.
-Identify possible improvements as well as possible pitfalls (including mitigation strategies) around executing the proposed architectures.
Requirements
Qualifications
-12+ years of related experience with a Bachelor’s degree, or 8+ years with a Master’s degree, in Electrical or Computer Engineering.
-In-depth knowledge of power-efficient digital design flow for high-speed AMS electronics, preferably on photonic communication channels.
-Proven track record of delivering successful high-volume silicon in the market.
-Proven understanding of silicon product development flow.
-In-depth knowledge of digital circuit design in a high-speed analog/photonic link, including device bringup, control, and digital signal processing.
-Demonstrated strong problem-solving skills, specifically pertaining to problems that do not have obvious solutions.
-Excellent customer and vendor communication skills.
Preferred Qualifications
-Understanding of SerDes PCS layer and equalization along with forward error-correction schemes.
-Ability and desire to collaborate in a cross-disciplinary team.
-Power user of multiple development tools, including CAD tools, SPICE simulators, IBIS-AMI simulators, and general programming languages (e.g., MATLAB or Python).
Senior Software Engineer - API · Lightmatter · Boston, MA and Mountain View, CA
Hide Details
Session: Job Postings
Description: About this Role
In this role, you will develop the API that bridges photonic hardware and advanced interface systems, ensuring seamless communication between hardware and software. Your work will enable efficient processing of real-time and historical data, playing a key part in driving innovation within photonic AI and computing. By creating the foundation for dynamic data transfer, you will contribute to the future growth and breakthroughs in this cutting-edge technology.
Responsibilities
-API Design & Development: Architect, build, and maintain APIs to support the flow of live data and historical data (from a database) between backend services and the frontend Next.js application, focusing on dynamic data fetching.
-Hardware Monitoring & Management: Focus on developing APIs that represent hardware systems, providing essential capabilities for monitoring and management of those systems.
-Backend Service Development: Design and implement scalable backend services, ensuring efficient communication between microservices and the frontend.
-Security & Authentication: Implement robust security protocols such as OAuth and JWT to ensure secure access to APIs. Emphasize data encryption and secure API endpoints.
-Real-Time Data Processing: Develop solutions for handling real-time data, integrating technologies like WebSockets to enable low-latency data transmission between backend services and the frontend.
-Collaboration with Front-End Teams: Work closely with the frontend team to ensure seamless integration of backend APIs with the application, particularly in the areas of dynamic data fetching and processing.
-Testing & Documentation: Write automated tests to ensure the reliability and performance of APIs. Maintain clear and concise documentation for API usage to support the frontend and internal teams.
-Performance Optimization: Continuously monitor and optimize API performance to reduce response times and improve system scalability.
Requirements
Qualifications
-BS and 12+ years of experience or MS and 8+ years of experience
-Proficiency in API development using systems/environments focused on API design and performance optimization (e.g., OpenAPI, RESTful services, API Gateway, microservices architecture).
-Strong understanding of API design principles and best practices for building scalable, secure, and high-performance services.
-Experience with security protocols (OAuth, JWT) and data encryption techniques.
-Expertise in real-time data handling, including WebSockets or similar technologies.
Preferred Qualifications
-Experience building backend services for web applications like Next.js.
Company Description: Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
·
·
2024-11-11
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Staff Power Integrity Engineer
·
Lightmatter
·
Boston, MA and Mountain View, CA
Hide Details
Session: Job Postings
Description: About this Role
We are defining what computers and human beings are capable of by delivering a fundamentally new kind of computer that calculates using light. Transistors, the workhorse of modern computers, aren't improving at the rate they once were. To scale AI, companies are building increasingly large and energy-expensive data centers—a path that is neither financially nor environmentally sustainable. By delivering ultra-high performance and energy efficiency, Lightmatter’s compute engine will power rapid advancements in AI while lowering its environmental footprint.
Lightmatter is looking for a Senior Power Integrity Engineer to drive next-generation products from concept to development. You will be part of the cross-functional systems and packaging team consisting of silicon, package, and systems engineers, tasked with driving the package and system architecture definition from concept through design optimization and completion to meet the product’s power integrity requirements. This position offers a unique opportunity to innovate new package and system architectures by solving complex interactions across the digital, analog, and photonics domains while meeting stringent signal and power integrity goals.
Responsibilities
-Design the end-to-end power delivery network (PDN) for digital and analog SoC domains, from the voltage regulator to the on-chip decoupling.
-Model voltage regulator switching ripple, bandwidth, and transient response and aid in the selection of power stages, controllers, and associated passives.
-Simulate PCBs and packages in commercial 2.5D electromagnetics simulators to extract N-port models and subsequently, impedance profiles.
-Perform time domain simulations of supply rails to characterize worst case droops and overshoots.
-Perform detailed power integrity measurements and characterization using lab equipment, for example, using programmable loads to measure the transient response of the power delivery network.
-Collaborate with cross-functional teams such as hardware designers, PCB layout engineers, system architects, and power and performance engineers to provide power integrity guidance and support throughout the product development lifecycle.
-Contribute to power integrity design reviews, providing technical expertise and recommendations to ensure optimal power integrity performance, and adherence to design specifications.
-Document and communicate analysis results, design guidelines, and recommendations to stakeholders, including design teams, management, and customers.
-Participate in cross-functional teams to define and influence system-level architecture decisions that impact power integrity, ensuring optimal performance and scalability.
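As context for the PDN design responsibilities above, a common first-order sizing heuristic (a textbook rule of thumb, not taken from this posting) derives a target impedance from the allowed supply ripple and the worst-case transient current, then checks each decoupling capacitor's impedance versus frequency:

```python
import math

def target_impedance(vdd: float, ripple_fraction: float, i_transient: float) -> float:
    """Z_target = allowed voltage ripple / worst-case transient current step."""
    return (vdd * ripple_fraction) / i_transient

def cap_impedance(c_farads: float, esl_henries: float, esr_ohms: float, f_hz: float) -> float:
    """Magnitude of a capacitor's series-RLC impedance (C, parasitic ESL, ESR) at f."""
    w = 2 * math.pi * f_hz
    reactance = w * esl_henries - 1 / (w * c_farads)
    return math.hypot(esr_ohms, reactance)

# Example: 0.9 V rail, 5% ripple budget, 30 A transient step -> 1.5 mOhm target
z_target = target_impedance(0.9, 0.05, 30.0)
```

In practice the flat-target rule is only a starting point; the extracted N-port models and time-domain droop simulations listed above refine it per rail.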
Requirements
Qualifications
-Master's in Electrical Engineering or relevant field with 6 years of relevant experience, or Bachelor’s degree with 8 years of related experience in power delivery and power integrity analysis for high performance CPUs and compute SoCs.
-Proficiency in using simulation and analysis tools for power integrity, including PowerSI, PowerDC, H-Spice, and ADS.
-Experience with DC-DC converter design.
-Experience with PCB design and layout considerations for power integrity, including layer stack-up assignment, power plane partitioning, capacitor selection and placement, and microvia/PTH design.
-Strong problem-solving and debugging skills for power integrity-related issues.
-Excellent communication skills, both verbal and written, to effectively collaborate with cross-functional teams and present findings to stakeholders.
-Ability to work independently and manage multiple projects simultaneously.
Preferred Qualifications
-Working knowledge of different 2D/2.5D/3D packaging technologies and associated PDN challenges.
-Familiarity with lab equipment for signal measurement and characterization, such as oscilloscopes and network analyzers.
-Proficiency in scripting languages such as SKILL, TCL, Python, or Matlab for automation of design and modeling flows and post-processing results.
Company Description: Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
·
·
2024-11-11
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Sr. Staff Analog IC Design Engineer
·
Lightmatter
·
Boston, MA and Mountain View, CA
Hide Details
Session: Job Postings
Description: About this Role
In this role, you will be a key member of the analog team architecting the world's first photonic computers. This is fundamentally an interdisciplinary team, and outside-the-box thinking is the daily norm. You will need both deep expertise in mixed-signal and analog design and breadth across other engineering disciplines, such as photonics and digital electronics, semiconductor device physics, thermal/packaging, and machine learning. The successful candidate will demonstrate an eagerness to refresh and grow their competency in these areas.
*We are currently hiring for multiple levels for this role. Your level and compensation will be determined by your experience, education, and location.
Responsibilities
-Participate in the architecture development with our chip architects by leading feasibility studies
-Collaborate with other design engineering teams (system, packaging, digital, photonics) to translate system-level requirements into electrical design specifications
-Lead the analog design and top-level integration of integrated circuits development
-Design complex analog and mixed-signal circuits
-Create technical reports and design specifications documents
-Plan DFT strategy and support its design implementation
-Oversee the performance validation of the analog circuits in the lab
-Support production test development and ramp to production
Requirements
Qualifications
-MS with 8+ years of relevant experience, OR Ph.D. with 5+ years of experience
-3+ years of semiconductor industry experience in analog/mixed-signal IC design
-Solid fundamentals of CMOS device characteristics, noise, mismatch, linearity, and design trade-offs
-Experience designing complex blocks such as data converters, control-loop circuits used in power delivery, and high-speed interface circuits
Preferred Qualifications
-Experience in advanced FinFET process nodes is a plus
-Experience with Cadence Design Environment
-Hands-on experience in the lab with silicon evaluation and validation
-Ability to work collaboratively with people across multiple functional areas
-Ability to explain advanced concepts and relate them to other disciplines
-Comfortable working on projects at all levels of the organization and serving as an external spokesperson
Company Description: Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
·
·
2024-11-11
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Senior Technical Program Manager
·
Lightmatter
·
Boston, MA and Mountain View, CA
Hide Details
Session: Job Postings
Description: About the job
We are hiring a Senior Technical Program Manager to help coordinate the engineering effort behind Lightmatter’s AI and Chiplet Interconnect products. You will lead program management for Lightmatter’s silicon hardware engineering team from design through production, collaborating with engineering teams across the company to plan and drive the team's objectives. This role will span silicon development, packaging, assembly, and test phases.
In this role, you will collaborate constantly with engineering teams across the company to plan and drive objectives for the group. You will gather requirements, drive quality initiatives, manage technical risks, and plan for all phases of the silicon design process. In addition, you’ll partner with executives and engineering leadership to develop and manage milestones and schedules for the many moving parts that need to come together.
Responsibilities
-Organize and expertly drive planning, scheduling, and day-to-day execution to support Lightmatter’s silicon, package and systems development
-Collaborate with leads and engineering teams in effectively estimating and prioritizing tasks in order to maintain excellent quality on a realistic delivery schedule
-Develop and lead project plans (scope, schedule, and budget) to ensure alignment with key partners and business needs
-Manage project schedules and quality, identify possible issues and risks, and clearly communicate them to project stakeholders
-Facilitate recurring project meetings and operational checkpoint activities throughout the life cycle of projects
-Set clear and targeted communication to management of project information, including project plan, key dates, and project status
-Provide the required hands-on project management, cross-functional coordination, and internal and external team communications to deliver outstanding program outcomes
-Take responsibility for release schedules and milestones in a fast-paced environment
-Opportunities to engage with vendors and customers in external programs
Requirements
Qualifications
-8+ years of experience in semiconductor technologies and participation in multiple tapeouts
-Bachelor’s degree in a technical field
-Experience with silicon design, package, and system semiconductor technologies
-Hands-on experience in hardware program/project management
-Experience influencing decisions and leading teams in a matrix environment
-Attention to detail and strong ability to multitask in an ever-changing fast-paced environment
-Proven ability to identify and implement process improvements, characterized by a high level of responsiveness and passion for enhancing operational efficiency
-Expert user of program management tools such as MS-Project, Asana, Smartsheet, or similar
-Excellent communications and technical presentation skills
-Strong teamwork skills with the ability to collaborate with multiple functional teams across a variety of fields
Preferred Qualifications
-Knowledge of photonic structures, optical communication, and laser solutions
-Experience setting up project management initiatives from scratch in either a startup or a new business unit environment
Company Description: Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
·
·
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Senior Infrastructure Engineer
·
Lightmatter
·
Boston, MA and Mountain View, CA
Hide Details
Session: Job Postings
Description: About this Role
As an infrastructure engineer, you will play a critical role in developing, managing, deploying, and supporting the infrastructure used by Lightmatter as the company develops the next generation of computing technology. This role will put you at the intersection of hardware and software, and the forefront of the emerging field of photonic computing.
Come and help us build high-performance photonic systems. In this role, you will be responsible for deploying and providing support for critical compute, storage, networking, and tooling infrastructure for Lightmatter.
Responsibilities
-Evolve the infrastructure: Develop and manage our compute, storage, and networking infrastructure to meet the evolving needs of our R&D, engineering, sales, and operations teams.
-Support end-to-end Infrastructure at Lightmatter: Maintain Lightmatter’s infrastructure strategy and security, interfacing with 3rd party service providers as needed, to ensure a best-in-class experience for all Lightmatter employees.
-Collaborate with EDA companies: Work closely with Electronic Design Automation (EDA) companies to provide Lightmatter engineers with best-in-class tools and support.
-Support innovative methodologies and flows: Assist in the automation, development, deployment, and support of innovative verification, physical design methodology, and CAD flows.
-Manage tools and licenses: Oversee the management of tool licenses and versions, ensuring that any tool-related issues are quickly and efficiently resolved.
-Partner with internal and external teams: Collaborate with our internal infrastructure team and cloud partners to ensure that all Lightmatter teams receive effective support when issues arise.
-Work with the infrastructure leadership team to identify improvements in Infrastructure as LM grows
Requirements
Qualifications
-Bachelor’s/associate degree in Software, Computer Engineering, Computer Science, Electrical Engineering or related field
-5 years of industry experience in cloud development or networking
-3+ years of experience scripting or programming in Python
-Proven track record in architecting, developing, and deploying secure compute, storage, and networking infrastructure in a cloud environment.
-Demonstrated experience building strong relationships and working closely with external partners to develop and support infrastructure, flows, and CAD/EDA tools.
-Proficient in hardware development or software development
Preferred Qualifications
-Master’s degree in Software, Computer Engineering, Computer Science, Electrical Engineering or related field
-Electronic Design Automation (EDA) experience.
-3 years of industry experience in cloud development or networking
-Previous experience working in a startup environment
-Adept at managing suppliers, with an ability to negotiate and liaise effectively.
-Familiar with scripting or programming in Bash and Terraform
Company Description: Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
·
·
2024-11-11
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
High Speed Test Characterization Photonics Engineer
·
Lightmatter
·
Boston, MA
Hide Details
Session: Job Postings
Description: About this Role
We are seeking a highly skilled High-Speed Photonics Engineer with extensive hands-on experience in lab testing high-speed optical links and photonic modulators. The ideal candidate will have a strong background in waveguide-based high-speed E/O and O/E components, and will be proficient in debugging, data analysis, and RF simulation tools.
Join a tight-knit team where each individual’s contributions directly influence the success of the company and product! You'll have the opportunity to build a new kind of computer from the ground up and to solve groundbreaking challenges along the way. Work with people who love to build and who thrive in technically diverse environments where great ideas are prioritized.
*Due to our characterization lab being in Boston, the person for this role will need to be located in the Boston area.*
Responsibilities
-Bring up and characterize high-speed optical interconnect packages and components, such as modulators and photodetectors.
-Use high-speed test components and equipment, including high-speed DCAs, BERTs, AWGs, VNAs, ESAs, OSAs, signal sources, electrical high-speed amplifiers, and filters.
-Demonstrate working knowledge of different equalization techniques, e.g., CTLE, DFE, FFE, in high-speed optical links.
-Utilize working knowledge of high-speed Signal Integrity eye diagrams, Bathtub Curves, etc.
-Identify hardware issues and work towards resolutions with teams specializing in integrated optics, high-speed electronics, and opto-electronic packaging.
-Use RF simulation tools like HFSS, Keysight ADS, or equivalents.
-Apply finite element analysis to photonics components using tools like Lumerical, Tidy3D, Synopsys OptoDesigner, etc.
-Program in languages such as Python and MATLAB.
-Work collaboratively across various functional teams within the company in a highly interdisciplinary environment.
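The equalization techniques named in the responsibilities (CTLE, DFE, FFE) can be illustrated with a toy model: an FFE is a short FIR filter whose taps are chosen to cancel inter-symbol interference. A minimal sketch; the channel and tap values below are illustrative, not taken from any real link.

```python
import numpy as np

# An ideal isolated pulse, smeared by a channel with one post-cursor tap:
# the symbol at index 2 leaks 40% of its amplitude into index 3 (ISI).
ideal = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
channel = np.array([1.0, 0.4])                 # main cursor + one post-cursor
received = np.convolve(ideal, channel)[:5]     # [0, 0, 1.0, 0.4, 0]

# A 2-tap feed-forward equalizer (FFE): an FIR filter whose second tap
# subtracts the post-cursor. Tap values chosen by hand for this channel.
ffe_taps = np.array([1.0, -0.4])
equalized = np.convolve(received, ffe_taps)[:5]  # post-cursor at index 3 cancelled
```

In a real link the taps are adapted (e.g., LMS) rather than hand-picked, and a DFE would feed back sliced decisions instead of raw samples.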
Qualifications
-PhD in Electrical Engineering, Photonics, or a related field.
-Minimum of 5 years of relevant experience in high-speed photonics engineering.
-Demonstrated experience in silicon photonics components, high-speed modulators, and high-speed optical interconnect.
-Strong analytical skills with the ability to perform detailed analysis and data interpretation.
-Proven track record of extensive lab work, particularly with high-speed instruments and integrated photonics chips or systems.
-Hands-on laboratory experience in setting up and characterizing high-speed optical communication systems.
-Strong verbal and written communication and documentation skills.
-Experience with Python tools for scientific computing and lab automation.
-Experience with active microwave circuits and RF and analog systems.
-Experience in high-speed circuit simulation and characterization using Cadence Spectre and 3D EM simulators such as Keysight ADS and HFSS is a plus.
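Since the qualifications call for Python tools for lab automation, here is a minimal sketch of an instrument sweep written against a pyvisa-style write/query interface. The SCPI strings (`FREQ`, `MEAS:POW?`) and the `FakeInstrument` stub are hypothetical placeholders; real instruments each define their own command sets.

```python
class FakeInstrument:
    """Stand-in for a pyvisa instrument resource (write/query API),
    so the sweep logic can be exercised without hardware."""
    def __init__(self, response_db):
        self.db = response_db   # maps frequency -> measured power
        self.freq = None
    def write(self, cmd):
        if cmd.startswith("FREQ "):
            self.freq = float(cmd.split()[1])
    def query(self, cmd):
        if cmd == "MEAS:POW?":
            return str(self.db[self.freq])
        raise ValueError(f"unknown command: {cmd}")

def sweep(inst, freqs_hz):
    """Step the stimulus frequency and read back one value per point."""
    out = []
    for f in freqs_hz:
        inst.write(f"FREQ {f}")              # illustrative SCPI; commands vary
        out.append(float(inst.query("MEAS:POW?")))
    return out

fake = FakeInstrument({1e9: -3.0, 2e9: -6.0})
trace = sweep(fake, [1e9, 2e9])              # [-3.0, -6.0]
```

The same `sweep` function would accept a real `pyvisa` resource, since pyvisa instruments expose the same `write`/`query` methods.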
Company Description: Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
ML SoC Architect · Lightmatter · Boston, MA and Mountain View, CA
Session: Job Postings
Description: About this Role
In this role, you will be defining the SoC architecture for a ground-breaking platform that integrates our high performance silicon-photonics-based network fabric with a custom machine learning accelerator. You will also work closely with our software, hardware and photonics teams to define and optimize the features needed to accelerate the next generation of machine learning algorithms.
The SoC Architect will create a blueprint for a Passage-first compute solution. This cross-disciplinary project brings together our experts in Software, Photonics, Systems, Packaging, Networking, and SoC Design to solve key design challenges in building computing solutions with Passage across servers and racks.
Responsibilities
-Define the SoC architecture for Lightmatter's next-gen product
-Author, review, and validate architectural specifications
-Actively collaborate with the hardware and software teams to perform trade-off analysis between power and performance for AI workloads on the proposed architecture
-Engage in problem solving with other architects and contribute novel architectural ideas
Requirements
-PhD in Computer Science, Computer Engineering, or Electrical Engineering plus 8 years of industry experience, or MS plus 12 years of industry experience
-8+ years of experience in compute ASIC architecture with an emphasis on low-power design
-5+ years of experience with processor design, accelerators, networks, and/or memory hierarchies
-Previous experience leading and collaborating in a cross-disciplinary team in a variety of fields
-Previous experience in a fast-paced (startup) environment, with the ability to react to change
-Experience influencing decisions in a matrix environment
Preferred Qualifications
-Experience in leading a low-power architecture project from start to commercial volume chip production in an advanced node
-Experience in post-silicon measurement with the goal of correlating with pre-silicon power forecast
-Demonstrated technical innovations with impact on AI or high-performance computing
-Demonstrated strong problem-solving skills in problems that do not have obvious solutions
Laser Architect · Lightmatter · Mountain View, CA
Session: Job Postings
Description: About this Role
We are hiring a Laser Architect to join our team. In this role, you will develop integrated and highly scalable laser solutions for AI computation. You will work closely with cross-disciplinary engineering teams both internally and across our ecosystem of manufacturing partners to identify requirements, deliver engineering specifications, execute new product designs for laser solutions, and validate system performance on hardware. This role requires a deep understanding of laser physics as well as integrated photonics and the requirements of datacom systems. You will navigate the design space of performance, schedule, reliability, power consumption, and cost, working in tandem with stakeholders across the company to identify optimal and innovative design solutions. Creativity, analytical skills, and clear communication skills are a necessity.
Join a tight-knit team where each individual’s contributions directly influence the success of the company and product. You'll have the opportunity to build innovative laser solutions from the ground up and tackle groundbreaking challenges. Work with people who love to build and who thrive in diverse technical environments where great ideas are prioritized.
Responsibilities
-Engage with a cross-functional team to design and deliver a successful high-volume laser product
-Deliver laser module architecture specifications and work closely with hardware and software teams to define functionality, interfaces, and documentation
-Engage in problem-solving to develop innovative high-performance laser solutions
-Actively collaborate across disciplines—with electronics, photonics, mechanical, and thermal engineering teams—to design systems, control algorithms, PICs, and PMICs that meet performance, power, and production requirements
-Engage with the engineering, product, sales, and architecture teams to define requirements for subsystems, device characterization, debug, and validation
-Validate architectures, models, and specifications via hands-on testing of dies, prototype assemblies, and packaged modules
-Collaborate with test engineers and technicians to define production test and validation protocols, as well as to analyze and debug test results
-Work with external vendors to ensure timely development and delivery of key processes and components
-Contribute to the roadmap of laser development, improving performance, scalability, power consumption, and cost
-Publish and present novel ideas, and participate in premier photonics conferences
Qualifications
-A Ph.D. degree in Photonics, Electrical Engineering, Applied Physics, or similar discipline with at least 3 years of post-degree relevant experience, or equivalent experience
-Minimum 8 years of experience in photonic devices, photonic integrated circuits, and semiconductor physics
-Strong understanding of laser physics and silicon photonics
-Experience closely collaborating with cross-functional teams
-Ability to convey complex technical concepts to both technical and non-technical stakeholders with strong and clear communication
-Demonstrated strong problem solving skills specifically pertaining to problems that do not have obvious solutions
-Willing and able to learn quickly, self-starter with a “no task is too big or small” attitude
Preferred Qualifications
-Experience in lasers for datacom applications
-Experience working in a high volume manufacturing environment
-Demonstrated technical leadership of a cross-functional team
Laser Systems Control Engineer · Lightmatter · Boston, MA and Mountain View, CA
Session: Job Postings
Description: About this Role
We are hiring a Laser Systems Control Engineer to join our team. In this role, you will be responsible for bringup, configuration, and stabilization of multiwavelength laser systems for datacenter AI communications. You will collaborate closely with a cross-functional team, including photonics, packaging, analog/digital design, and software engineers, to design and implement control algorithms that ensure the laser solution meets stability, power, and performance specifications across all operating conditions.
Join a tight-knit team where each individual’s contributions directly influence the company's and product's success. You'll have the opportunity to build innovative laser solutions from the ground up and tackle groundbreaking challenges. Work with people who love to build and thrive in diverse technical environments where great ideas are prioritized.
Responsibilities
-Design control algorithms for laser power and wavelength stabilization across various thermal environments and product life cycles for Lightmatter’s laser modules.
-Actively coordinate with engineers and architects across various disciplines/teams, such as Lightmatter’s Passage SoC, photonic systems design, thermal engineering, packaging, analog design, and software engineering, to meet power consumption, performance, and production requirements.
-Architect optical components (integrated and bulk optics) such as filters for multiplexing, demultiplexing purposes, and either design these devices or work with photonics engineers to implement the designs.
-Coordinate with the test and validation team on calibration routines for packaging and assembly.
-Deliver laser module datasheet documentation related to laser power management, supply noise requirements, stability, and control.
-Support design failure mode and error analysis for each laser solution.
-Select and/or codesign with vendors an appropriate PMIC approach to drive and control highly integrated multi-wavelength laser modules in a data center environment.
-Prototype control algorithms in simulation and in a laboratory environment to assess performance against manufacturing variations and external aggressors.
*This is not a complete listing of the responsibilities. It’s a representation of the things you will be doing*
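As a flavor of the stabilization work described above, here is a toy discrete-time PI loop that locks a simulated laser wavelength against a slow thermal drift. The plant model (0.5 nm of tuning per unit of heater drive), the gains, and the drift profile are all illustrative assumptions, not Lightmatter parameters.

```python
def stabilize(target_nm, steps=500, kp=0.3, ki=0.05):
    """Toy PI wavelength lock: each step measures the wavelength,
    updates an integral term, and drives a heater to pull the laser
    back onto target despite a slowly ramping thermal drift."""
    base_nm = 1550.30      # free-running wavelength (illustrative)
    gain = 0.5             # nm of tuning per unit heater drive (illustrative)
    heater = 0.0
    integral = 0.0
    error = target_nm - base_nm
    for k in range(steps):
        drift = 0.02 * min(k, 100) / 100.0     # thermal drift, up to 0.02 nm
        measured = base_nm + drift + gain * heater
        error = target_nm - measured
        integral += error                      # integral action removes
        heater = kp * error + ki * integral    # the steady-state offset
    return error

residual_nm = stabilize(1551.00)   # residual wavelength error after settling
```

A production controller would add anti-windup, gain scheduling over temperature, and dithering or wavelength-locker feedback for absolute reference, but the negative-feedback structure is the same.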
Qualifications
-M.S. or PhD in Photonics, Electrical Engineering, Software Engineering, or a related field.
-Experience developing laser stabilization and control architectures.
-Experience in conveying complex technical concepts to both technical and non-technical stakeholders, ensuring clear communication and alignment across teams.
Preferred Qualifications
-Hands-on experience testing control algorithms on prototype samples and EVKs.
-Experience in product development for high volume applications.
-Experience in silicon photonics with design for manufacturing and reliability.
-Knowledge of signal processing and digital control.
-Willing and able to learn quickly, with a self-starter mindset and a 'no task is too big or too small' attitude.
-Excellent written and verbal communication skills.
Staff Photonic Hardware Validation Engineer · Lightmatter · Mountain View, CA
Session: Job Postings
Description: About this Role
In this role, you will be responsible for advanced platform-level validation of next-generation Lightmatter products. You will develop product-specific platform validation test plans covering products from development through high-volume manufacturing. You will also engage with silicon design, digital verification, software, packaging, photonics, and system platform engineering teams to influence the definition of the product and to leverage the capabilities of the ecosystem from a platform validation perspective. You will develop validation capabilities, tools, and platform-focused tests at Lightmatter, and support customer deployments and related technical issues.
Responsibilities
-Own development of overall platform-level post-silicon validation strategies and plans for our highly integrated photonics-based AI platform, our photonics chiplet communication fabric and laser module.
-Define validation plan for platform-level qualification and high-volume manufacturing.
-Partner with and influence design teams (hardware, software) during pre-silicon development to ensure pre-silicon simulation meets specifications and milestone criteria. As needed, work with the design teams to include validation hooks, telemetry sensors, etc. on silicon, test vehicles, and reference systems for comprehensive post-silicon validation and monitoring.
-Set up photonic validation capabilities and automation, and execute validation in Lightmatter facilities.
Required Qualifications
-Bachelor’s degree in Electrical Engineering, Computer Engineering, Physics, Materials Engineering/Science, or other related fields
-8 years of industry experience in photonics and semiconductor products such as CPUs, GPUs, and networking solutions
-At least 5 years of exposure to silicon and photonic system validation
-3-5 years of validation content and script development using Python or C++
-Hands-on experience with post-silicon chip bring-up, system power-on, and photonic validation
-Ability to debug and triage photonic chip and system issues
-Must be able to travel to our Palo Alto Lab frequently
Preferred Qualifications
-A good understanding of advanced packaging (e.g., 2.5D, 3D packaging), photonic design, and laser modules
-A deep understanding of photonics, laser design, and architecture
-Experience in laser and photonics architecture and validation
Photonics Architect · Lightmatter · Boston, MA and Mountain View, CA
Session: Job Postings
Description: About this Role
We are hiring a Photonics Architect. The selected candidate will partner with external facing teams at Lightmatter and internal engineering teams to deliver groundbreaking products to the market. In this role, you will design and develop innovative photonics architecture solutions to deliver high-volume products for our customers. You will engage with our cross-disciplinary engineering teams to model and analyze solutions. You may also test and measure prototypes to validate the models. This role requires a deep understanding of silicon photonics process technologies, state-of-the-art photonics designs, high-speed (40+ GHz) electro-optic interfaces, lasers, and 3D integration. The role also requires creativity, analytical skills, and clear communication skills. You may represent the company externally at technical conferences as an example of our technical leadership.
Join a tight-knit team where each individual’s contributions directly influence the success of the company and product. You'll have the opportunity to build a new kind of computer from the ground up and to solve groundbreaking challenges along the way. Work with people who love to build and who thrive in technically diverse environments where great ideas are prioritized.
Responsibilities
-Contribute creative problem-solving to high-performance photonic architecture solutions for high-volume products.
-Develop, simulate, and validate control and initialization algorithms for photonic circuits.
-Perform multiphysics analysis and simulation of photonic circuits.
-Validate the models by designing experiments, conducting proof-of-concept experiments, and analyzing data.
-Actively collaborate across disciplines—with electronics, photonics, and mechanical engineers—to design systems, algorithms, circuits, and devices that meet performance, power, and production requirements.
-Document and present your contributions.
*This is not a complete listing of the responsibilities. It’s a representation of the things you will be doing*
Qualifications
-Ph.D. degree in Electrical Engineering, Applied Physics or similar discipline with at least 3 years of relevant experience, or Master’s degree with at least 6 years of relevant experience.
-Minimum 3 years of experience in modeling silicon photonic circuits.
-Minimum 2 years of experience in controlling silicon photonic circuits.
-Deep understanding of photonics, optical nonlinearities, semiconductors, thermal physics, transmission lines.
-Strong proficiency in photonic circuit simulations, using industry-standard tools such as VPI Systems, Lumerical Interconnect, or equivalent.
-Strong proficiency in coding simulations in Python or MATLAB.
-Experience with corner analysis and Monte Carlo analysis.
-Experience collaborating with photonics and electronics teams.
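The corner and Monte Carlo analysis called for above can be illustrated with a small sketch. The following Python fragment (hypothetical parameters, stdlib only, not a production photonics tool) samples Gaussian phase errors for a Mach-Zehnder interferometer biased at quadrature and reports the resulting spread in transmission:

```python
import math
import random

def mzi_transmission(phase_error_rad):
    """Power transmission of an ideal MZI biased at quadrature (pi/2),
    perturbed by a phase error: T = cos^2(phi / 2)."""
    phi = math.pi / 2 + phase_error_rad
    return math.cos(phi / 2) ** 2

def monte_carlo_spread(sigma_rad=0.05, trials=10_000, seed=42):
    """Monte Carlo over a fabrication-induced phase error; returns (mean, std)
    of the transmission distribution."""
    rng = random.Random(seed)
    samples = [mzi_transmission(rng.gauss(0.0, sigma_rad)) for _ in range(trials)]
    mean = sum(samples) / trials
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / trials)
    return mean, std
```

At quadrature the nominal transmission is 0.5, and a small phase sigma maps to a transmission sigma of roughly sigma/2; a corner analysis would instead evaluate the same model at worst-case parameter extremes.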
Preferred Qualifications
-Experience in product development or projects with industry.
-Experience in optical communications.
-Experience in silicon photonic integrated circuit design and measurements.
-Experience in design for manufacturing and reliability.
-Experience in microcontrollers and digital control.
-Strong publication record.
-Excellent written and verbal communication skills.
-Willingness and ability to learn quickly.
-Self-starter with a “no task is too big or small” attitude.
Company Description: Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
2024-11-11
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Sr. Staff Design Verification Engineer · Lightmatter · Boston, MA and Mountain View, CA
Session: Job Postings
Description: About this Role
As a Design Verification Engineer at Lightmatter, you will find yourself at the heart of a dynamic, interdisciplinary team. Your role will involve close collaboration with our digital design experts, using UVM testbench techniques to rigorously verify their designs. Your responsibilities will include working alongside photonic and analog designers, gaining a deep understanding of their innovative designs, and applying Real Number Modeling (RNM) and AMS verification methods.
This critical function ensures the integrity of their work. We are hiring Design Verification Engineers at multiple levels.
Your interaction with the Architecture team will be crucial in comprehending system requirements and spearheading performance verification. This role offers a unique platform to enhance your skills across a spectrum of areas including UVM, AMS modeling, mixed-signal verification, formal verification, emulation, and both performance modeling and verification.
Responsibilities
-Engage collaboratively with teams specializing in digital, photonics, and analog design to develop comprehensive test plans.
-Design and implement UVM testbenches for both subsystem-level and full-chip verification. This includes debugging testbenches, resolving issues, achieving high coverage, and overseeing the final sign-off on Design Verification (DV).
-Develop Real Number Models (RNM) for photonics and analog circuits, conduct AMS verification in conjunction with UVM, and ensure precise model representation. Contribute significantly to the development of the Golden Reference Model (GRM) for design verification.
-Play an integral role in the execution of emulation and formal verification for DV purposes.
Qualifications
-Bachelor’s degree in Electrical Engineering, Computer Engineering, a related field, or equivalent experience
-12 years of design verification and SystemVerilog experience
-2+ years of experience in Python
-Expertise in developing testbenches with the UVM library
-Experience with simulators such as Xcelium, ModelSim, Questa, or VCS
Preferred Qualifications
-Master’s degree or higher in Electrical Engineering, Computer Engineering, a related field, or equivalent experience, with 8 years of design verification and SystemVerilog experience
-Knowledgeable about assertion languages, power verification, reset-domain crossing verification, and AMS verification
-Strong problem solver, communicator, and team player with the ability to work with teams across multiple sites
-Ability to react to change and thrive in a fast-paced (startup) environment
-Communicates complex concepts effectively to diverse stakeholders, fostering support and consensus for initiatives
2024-11-11
Package Layout Design Engineer · Lightmatter · Boston, MA and Mountain View, CA
Session: Job Postings
Description: About this Role
As a Package Layout Design Engineer at Lightmatter, you will design complex substrates and interposers. We hire the best of the best and provide you with the autonomy to plan and execute your designs, leveraging your industry experience to meet timelines and objectives successfully. These products will be used in one of the fastest-growing segments of the semiconductor industry and will help Lightmatter maintain its leading position in silicon photonics technology.
Responsibilities
-Deliver IC package substrate layouts for various advanced packaging technologies.
-Actively participate in the product concept phase, delivering preliminary substrate design and floorplanning to help drive overall stack-up (silicon/package/MB) optimization.
-Work with package substrate suppliers and Outsourced Semiconductor Assembly and Test (OSAT) companies to understand layout design rules, stack-up, and implementation.
-Create fab files, work with substrate suppliers, and OSATs to close design for manufacturing (DFM) / design for assembly (DFA).
-Design complex package layouts with signal-integrity, power-integrity, thermal, and mechanical requirements in mind.
-Demonstrated ability in designing packages with analog and digital elements/interfaces.
-Support signal/power integrity engineers by providing iterative test routes to refine design rules and routing topologies.
-Apply project-management skills to coordinate and collaborate across multiple functional teams, ensuring an error-free package design tape-out in a timely manner.
-Drive design automation & verification activities.
Qualifications
-Bachelor's degree in Electrical Engineering or similar discipline with at least 8 years of relevant experience.
-Minimum 7 years of experience in designing complex layouts.
-Experience with Cadence APD+ as the primary layout tool.
-Good understanding of the IC package substrate and organic materials, and how they will impact package design.
Preferred Qualifications
-Educational background on IC packaging and substrate layout design.
-Familiarity with system level validation tools such as Cadence Integrity 3DIC for package/IC codesign.
-Familiarity with layout tools e.g. Expedition Package Designer, Pads Layout, Altium, etc.
-Experience with various layout tool scripting languages, e.g. SKILL or Python to enable design automation.
-Demonstrated excellent communication skills and ability to collaborate effectively in cross-functional teams.
2024-11-11
Lead Hardware Systems Engineer · Lightmatter · Boston, MA and Mountain View, CA
Session: Job Postings
Description: About this Role
We are seeking a Lead HW Systems Engineer with deep expertise in server component design and extensive knowledge of thermo-mechanical and electrical solutions for both air- and liquid-cooled systems. In this role, you’ll be responsible for delivering system solutions using Lightmatter’s innovative technology to develop state-of-the-art solutions for data centers and high-performance computing.
Responsibilities
-Design, test, analyze, integrate, qualify, and document complex hardware systems from start to finish.
-Collaborate with customers, vendors, and stakeholders to define system design requirements.
-Work with the system architects and design engineers to develop optimal solutions for system design requirements.
-Develop concept, intermediate, and final designs, incorporating peer reviews and prototype evaluations.
-Ensure optimal thermal and structural designs through efficient system packaging.
-Lead the selection and integration of components (PCB technology, connectors, cables, regulators, etc.) to ensure optimal electrical design for high-speed interfaces and power-efficient solutions.
-Collaborate cross-functionally to support design integration, power-on, and validation of complex products and subsystems.
-Perform root cause analysis and lead technical investigations for system design issues.
-Contribute to technical roadmaps, establish best practices, and assess product feasibility.
Qualifications
-M.S. or Ph.D. in Mechanical or Electrical Engineering, or a related field.
-15+ years of experience in systems engineering or hardware development, with a focus on datacenter development.
-Proven track record in hardware bring-up, working collaboratively with silicon, package, and systems teams.
-Extensive experience in solving highly complex system challenges related to thermal management, high-speed electrical design, and system packaging.
-Experience in managing projects and leading cross-functional teams, including working with external vendors.
Preferred Qualifications
-Knowledge of integration challenges of optical components at the system level, including electrical-optical link budgets, fiber management, and lasers.
-Strong analytical and problem-solving abilities, adept at addressing and resolving complex technical challenges.
-Excellent collaboration skills in working across diverse teams to ensure seamless project execution and alignment.
-Strong verbal and written communication skills.
2024-11-11
Senior Software Engineer — CUDA Python · NVIDIA · Remote
Session: Job Postings
Description: NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. We're looking to grow our company, and form teams with the most inquisitive people in the world. Join us at the forefront of technological advancement.
We are looking for experienced software professionals to lead/extend our work on bringing delightful developer and user experience to the Python ecosystem. Our goal is to grow NVIDIA’s accelerated Python offerings to a mature product and make Python one of the first-class citizens for programming NVIDIA CUDA GPUs. You will be a crucial member of a team that is working to bring together the power of GPU acceleration and the expressibility and programmability of Python, by developing foundational software that supports many key products spanning the gamut of high performance computing, scientific computing, data analytics, deep learning, and professional graphics running on hardware ranging from gamer laptops to supercomputers to the cloud.
What You'll Be Doing:
As a member of our team, you will use your design abilities, coding expertise, creativity, and community engagement to develop and enhance the functionality and performance of NVIDIA GPUs such that the current and future generations of Python users can enjoy the programmability and take full advantage of the NVIDIA CUDA platform, including both NVIDIA hardware and software. Specifically, you will be working to:
• Architect, prioritize, and develop new features in CUDA Python
• Analyze, identify, and improve the UX and performance of CUDA software in Python
• Write effective, maintainable, and well-tested code for production use
• Bridge the language gap between existing CUDA C/C++ solutions and Python
• Understand and address unique challenges in developing and deploying Python GPU solutions
• Identify key open source players in the Python/PyData ecosystem, and engage with them to develop and drive necessary protocols and standards for the NVIDIA CUDA platform
• Evangelize CUDA programming in Python to encourage and empower adoption of the NVIDIA CUDA platform
What We Need To See:
• BS, MS or PhD degree in Computer Science, Electrical Engineering or related field (or equivalent experience)
• 5+ years of relevant industry experience or equivalent academic experience after BS
• Strong Python programming and deployment skills with track record of driving formulation and/or adoption of Python community standards
• Fluent C/C++ and CUDA programming skills
• Background in AI, high performance computing or performance critical applications
• Track record of developing/maintaining Python projects, and/or engaging with Python users on UX or performance improvements
• Experience in designing, developing, tuning, navigating, and/or maintaining a large, complex, multi-language software stack (between C/C++/CUDA and Python)
• Good written communication, collaboration, and presentation skills, with the ability to operate across team boundaries
• Experience in distributed programming in C/C++/Python using MPI, Dask, Legate, or other distributed programming models/frameworks
• Knowledge of generating Python bindings for mid- to large-size C/C++ codebases
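As a toy illustration of the Python-bindings item above, this `ctypes` sketch wraps a single function from the system C math library (assumes a Unix-like system; mid- to large-size codebases would more typically use pybind11, Cython, or generated bindings):

```python
import ctypes
import ctypes.util

# Locate the C math library (e.g. "libm.so.6" on Linux). On platforms where
# the math symbols live in the main C runtime, CDLL(None) exposes them too.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes converts arguments correctly:
#   double cos(double)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # → 1.0
```

Without the `argtypes`/`restype` declarations, ctypes would default to `int` conversions and return garbage, which is exactly the kind of pitfall binding generators automate away.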
Ways To Stand Out From The Crowd:
• Deep understanding of the CUDA programming model and language features
• Familiarity with Python ecosystem, language idioms, and pioneering solutions
• Dexterity with compilers, static/dynamic analysis techniques, and/or dynamic code generation/transpilation/compilation
• Experience in using or developing the LLVM/Clang compiler infrastructure
• Experience in memory management of a multi-language project or development of domain specific libraries/languages for AI, Data Analytics or Scientific Computing
With competitive salaries and a generous benefits package, we are widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to unprecedented growth, our exclusive engineering teams are rapidly growing. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you!
The base salary range is 180,000 USD - 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Deep Learning Engineer — Distributed Task-Based Backends · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: We are looking for Senior- to Principal-level software professionals to help build the next generation of distributed backends for premier Deep Learning frameworks like PyTorch, JAX, and TensorFlow. You will build on top of validated task-based runtime systems like Legate, Legion, and Realm to develop a platform that can scale a wide range of model architectures to thousands of GPUs!
What You Will Be Doing:
• Develop extensions to popular Deep Learning frameworks that enable easy experimentation with various parallelization strategies
• Develop compiler optimizations and parallelization heuristics to improve the performance of AI models at extreme scales
• Develop tools that enable performance debugging of AI models at large scales
• Study and tune Deep Learning training workloads at large scale, including important enterprise and academic models
• Support enterprise customers and partners to scale novel models using our platform
• Collaborate with Deep Learning software and hardware teams across NVIDIA, to drive development of future Deep Learning libraries
• Contribute to the development of runtime systems that form the foundation of all distributed GPU computing at NVIDIA
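Task-based runtimes such as Legion and Realm express computations as graphs of dependent tasks that the runtime schedules across resources. A deliberately tiny stdlib sketch of that idea (not the Legion API) using independent partial-sum tasks followed by a reduction:

```python
from concurrent.futures import ThreadPoolExecutor

def task_based_sum(chunks):
    """Minimal task graph: one partial-sum task per chunk (independent,
    so they may run in parallel), then a reduction that depends on all."""
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(sum, chunks))  # map phase: independent tasks
    return sum(partials)                        # reduce phase: joins all tasks

print(task_based_sum([[1, 2], [3, 4], [5]]))  # → 15
```

Real task-based runtimes generalize this pattern: dependencies are inferred from data usage, and tasks are mapped onto CPUs, GPUs, and nodes rather than a thread pool.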
What We Need To See:
• BS, MS or PhD degree in Computer Science, Electrical Engineering or related field (or equivalent experience)
• 5+ years of relevant industry experience or equivalent academic experience after BS
• Proficient with Python and C++ programming
• Strong background with parallel and distributed programming, preferably on GPUs
• Hands-on development skills using Machine Learning frameworks (e.g. PyTorch, TensorFlow, Jax, MXNet, scikit-learn, etc.)
• Understanding of Deep Learning training in distributed contexts (multi-GPU, multi-node)
Ways To Stand Out From The Crowd:
• Experience with deep-learning compiler stacks such as XLA, MLIR, Torch Dynamo
• Background in performance analysis, profiling and tuning of HPC/AI workloads
• Experience with CUDA programming and GPU performance optimization
• Background with tasking or asynchronous runtimes, especially data-centric initiatives such as Legion
• Experience building, debugging, profiling and optimizing multi-node applications, on supercomputers or the cloud
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
NVIDIA
In-person
Remote
Full Time
Senior Software Engineer, Distributed Task-Based Runtimes · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. We're looking to grow our company and form teams with the most inquisitive people in the world. Join us at the forefront of technological advancement.
We are looking for an experienced software professional to lead and extend our work on the Legion and Realm runtimes for large-scale distributed GPU computing. You will unlock the power of distributed GPU computing by developing foundational software that supports key products spanning data analytics, deep learning, HPC, and professional graphics, running on hardware ranging from supercomputers to the cloud.
What You'll Be Doing
As a member of our team, you will use your design abilities, coding expertise, and creativity to develop and improve the functionality and performance of runtime systems that underlie the foundation of distributed GPU computing at NVIDIA.
• Architect, prioritize, and develop new features
• Analyze and improve the performance for key applications
• Write effective, maintainable, and well-tested code for production use
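For candidates less familiar with task-based runtimes such as Legion and Realm, the core idea (tasks dispatched once their data dependencies resolve) can be sketched in a few lines of Python. This is an illustrative toy; `run_tasks` is a hypothetical helper, not Legion's actual API:

```python
from collections import deque

def run_tasks(tasks, deps):
    """Execute tasks in dependency order (Kahn's topological sort).

    tasks: dict name -> zero-arg callable
    deps:  dict name -> list of prerequisite task names
    Returns the order in which tasks ran.
    """
    indegree = {t: len(deps.get(t, [])) for t in tasks}
    children = {t: [] for t in tasks}
    for t, prereqs in deps.items():
        for p in prereqs:
            children[p].append(t)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                 # a real runtime dispatches to a worker/GPU
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:   # all prerequisites done: task is ready
                ready.append(c)
    if len(order) != len(tasks):
        raise ValueError("cycle in task graph")
    return order
```

A real distributed runtime additionally maps tasks to processors, moves data between memories, and overlaps the two; the dependency bookkeeping above is the common skeleton.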
What We Need To See
• BS, MS or PhD degree in Computer Science, Electrical Engineering or related field (or equivalent experience)
• 5+ years of relevant industry experience or equivalent academic experience after BS
• Strong C/C++ and CUDA programming skills
• Background in high performance computing and performance critical applications
• Experience implementing, tuning, and debugging runtimes and/or distributed systems for supercomputers or the cloud
• Good written communication, teamwork, and presentation skills
Ways To Stand Out from the Crowd
• Background with tasking or asynchronous runtimes, especially data-centric initiatives such as Legion
• Dexterity with compilers and static/dynamic alias analysis techniques
• Experience using Python for HPC
• Knowledge of high performance spatial data structures and algorithms used in accelerating point and ray queries
• Development of domain specific libraries/languages for high performance computing
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
NVIDIA
In-person
Remote
Full Time
Senior Software Engineer, GPU Communications and Networking · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions from artificial intelligence to autonomous cars. NVIDIA is looking for phenomenal people like you to help us accelerate the next wave of artificial intelligence.
We are looking for a highly motivated senior software engineer for an exciting role in our communication libraries and network software team. The position will be part of a fast-paced crew that develops and maintains software for complex heterogeneous computing systems that power disruptive products in High Performance Computing and Deep Learning.
What You Will Be Doing
• Design, implement and maintain highly-optimized communication runtimes for Deep Learning frameworks (e.g. NCCL for TensorFlow/PyTorch) and HPC programming interfaces (e.g. UCX for MPI/OpenSHMEM) on GPU clusters.
• Participate in and contribute to parallel programming interface specifications like MPI/OpenSHMEM.
• Design, implement and maintain system software that enables interactions among GPUs and interactions between GPUs and other system components.
• Create proof-of-concepts to evaluate and motivate extensions in programming models, new designs in runtimes and new features in hardware.
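To give a concrete flavor of the collectives involved: the ring all-reduce, a standard algorithm used by libraries in this space such as NCCL, can be simulated in plain Python. The sketch below models only the data movement; the real implementations run in CUDA over NVLink and the network:

```python
def ring_allreduce(buffers):
    """Simulate a ring all-reduce over n ranks.

    buffers: list of n lists, rank i's local values pre-split into
    n single-element chunks. Returns each rank's buffer after the
    collective: every rank holds the element-wise sum.
    """
    n = len(buffers)
    buf = [list(b) for b in buffers]
    # Reduce-scatter: n-1 steps; each rank adds the chunk arriving from
    # its left neighbour. Afterwards rank i owns fully reduced chunk (i+1) % n.
    for s in range(n - 1):
        snap = [list(b) for b in buf]       # all sends happen "at once"
        for i in range(n):
            c = (i - s - 1) % n             # chunk arriving at rank i
            buf[i][c] += snap[(i - 1) % n][c]
    # All-gather: n-1 more steps circulating the reduced chunks.
    for s in range(n - 1):
        snap = [list(b) for b in buf]
        for i in range(n):
            c = (i - s) % n                 # reduced chunk arriving at rank i
            buf[i][c] = snap[(i - 1) % n][c]
    return buf
```

Each rank sends and receives only 1/n of the buffer per step, which is what makes the ring pattern bandwidth-efficient at scale.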
What We Need To See
• MS/PhD degree in CS/CE or equivalent experience
• 5+ years of relevant experience
• Excellent C/C++ programming and debugging skills
• Strong experience with Linux
• Expert understanding of computer system architecture and operating systems
• Experience with parallel programming interfaces and communication runtimes
• Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporate environment
Ways To Stand Out from the Crowd
• Deep understanding of technology and passion for what you do
• Experience with CUDA programming and NVIDIA GPUs
• Knowledge of high-performance networks like InfiniBand, iWARP, etc.
• Experience with HPC applications
• Experience with Deep Learning Frameworks such as PyTorch, TensorFlow, etc.
• Strong collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic matrix environment
NVIDIA offers highly competitive salaries and a comprehensive benefits package. We have some of the most forward-thinking and talented people in the world working for us and, due to unprecedented growth, our world-class engineering teams are growing fast. If you're a creative and autonomous engineer with real passion for technology, we want to hear from you.
The base salary range is 148,000 USD – 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
NVIDIA
In-person
Full Time
Senior HPC Performance Engineer · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High Performance Computing and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions from artificial intelligence to autonomous cars.
We are the GPU Communications Libraries and Networking team at NVIDIA. We deliver libraries like NCCL, NVSHMEM, and UCX for Deep Learning and HPC. We are looking for a motivated Performance Engineer to influence the roadmap of our communication libraries. Today's DL and HPC applications have huge compute demands and run at scales of up to tens of thousands of GPUs. The GPUs are connected with high-speed interconnects (e.g. NVLink, PCIe) within a node and with high-speed networking (e.g. InfiniBand, Ethernet) across nodes. Communication performance between the GPUs has a direct impact on end-to-end application performance, and the stakes are even higher at large scales. This is an outstanding opportunity for someone with an HPC and performance background to advance the state of the art in this space. Are you ready to contribute to the development of innovative technologies and help realize NVIDIA's vision?
What You Will Be Doing
• Conduct in-depth performance characterization and analysis on large multi-GPU and multi-node clusters.
• Study the interaction of our libraries with all HW (GPU, CPU, Networking) and SW components in the stack.
• Evaluate proof-of-concepts, conduct trade-off analysis when multiple solutions are available.
• Triage and root-cause performance issues reported by our customers.
• Collect large volumes of performance data; build tools and infrastructure to visualize and analyze it.
• Collaborate with a very dynamic team across multiple time zones.
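As an illustration of the measurement discipline this role involves, a micro-benchmark harness typically warms up first and then reports robust statistics rather than a single timing. Below is a minimal, hypothetical Python sketch; a real harness would time NCCL or UCX operations on a cluster:

```python
import statistics
import time

def benchmark(fn, warmup=5, iters=50):
    """Micro-benchmark fn(): warm up, then report timing stats in microseconds."""
    for _ in range(warmup):          # warm caches, allocators, connections
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e6)
    samples.sort()
    return {
        "min_us": samples[0],                      # best case, least noise
        "median_us": statistics.median(samples),   # robust central tendency
        "p95_us": samples[int(0.95 * (iters - 1))],  # tail latency
    }
```

Reporting min, median, and a tail percentile (rather than the mean) keeps one-off scheduler or network hiccups from distorting the result.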
What We Need To See
• MS (or equivalent experience) or PhD in Computer Science, or related field with relevant performance engineering and HPC experience
• 3+ years of experience with parallel programming and at least one communication runtime (MPI, NCCL, UCX, NVSHMEM)
• Experience conducting performance benchmarking and triage on large scale HPC clusters
• Good understanding of computer system architecture, HW-SW interactions and operating systems principles (i.e., systems software fundamentals)
• Ability to implement micro-benchmarks in C/C++, read and modify the code base when required
• Ability to debug performance issues across the entire HW/SW stack
• Proficiency in a scripting language, preferably Python
• Familiarity with containers, cloud provisioning and scheduling tools (Kubernetes, SLURM, Ansible, Docker)
• Adaptability and passion to learn new areas and tools
• Flexibility to work and communicate effectively across different teams and time zones
Ways To Stand Out from the Crowd
• Practical experience with InfiniBand/Ethernet networks in areas like RDMA, topologies, congestion control
• Experience debugging network issues in large scale deployments
• Familiarity with CUDA programming and/or GPUs
• Experience with Deep Learning Frameworks such as PyTorch, TensorFlow
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
NVIDIA
In-person
Full Time
Software Engineering Manager — GPU Communications Libraries · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: We are the GPU Communications Libraries and Networking team at NVIDIA. We deliver communication libraries like NCCL, NVSHMEM, and UCX for Deep Learning and HPC. DL and HPC applications already have huge compute demands and run at scales of up to tens of thousands of GPUs. The GPUs are connected with high-speed interconnects (e.g. NVLink, PCIe) within a node and with high-speed networking (e.g. InfiniBand, Ethernet) across the nodes.
Communication performance between the GPUs has a direct impact on the end-to-end application performance; and the stakes are even higher at huge scales! We are looking for a technical leader to manage our NVSHMEM and UCX libraries. This is an outstanding opportunity to push the limits on the state of the art and deliver platforms the world has never seen before. Are you ready to contribute to the development of innovative technologies and help realize NVIDIA's vision?
What You Will Be Doing
• Lead, mentor, and grow your library engineering team and be responsible for the planning and execution of projects as well as the quality and performance of your libraries.
• This is a technical leadership role, so you will participate in feature design and implementation.
• Interact with internal and external partners and researchers to understand their use cases and requirements.
• Collaborate with engineering teams, program and product management, and partners to define the product roadmap.
• Continuously review and identify improvement opportunities in established processes, infrastructure, and practices to ensure the teams are executing in the most efficient and transparent manner.
What We Need To See
• 10+ overall years of experience in the software industry with specialization in HPC networking or system software.
• 4+ years of management experience.
• BS, MS, or PhD in CS, CE, EE (related technical field) or equivalent experience.
• Prior systems software or communication runtime or high performance networking software development experience with a successful track record of taking several complex software features or products through the full product life cycle.
• Strong understanding of computer system architecture, operating systems principles (i.e., systems software fundamentals), HW-SW interactions and performance analysis/optimizations.
• Excellent C/C++ programming and debugging skills in Linux.
• Experience balancing multiple projects with competing priorities.
• Flexibility to work and communicate effectively across different teams and time zones.
Ways To Stand Out from the Crowd
• Experience with parallel programming models (MPI, SHMEM) and at least one communication runtime (MPI, NCCL, NVSHMEM, OpenSHMEM, UCX, UCC).
• Experience with programming using CUDA, MPI, OpenMP, OpenACC, pthreads.
• Background with RDMA, high-performance networking technologies (InfiniBand, RoCE, Ethernet, EFA), network architecture and network topologies.
• Knowledge of HPC and ML/DL fundamentals.
• Experience with Deep Learning Frameworks such as PyTorch, TensorFlow, etc.
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology — and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world. NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative and autonomous, we want to hear from you!
The base salary range is 180,000 USD – 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type
Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
NVIDIA
In-person
Full Time
Senior Software Architect — Deep Learning and HPC Communications · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA is leading groundbreaking developments in Artificial Intelligence, High Performance Computing and Visualization. The GPU — our invention — serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables groundbreaking creativity and discovery, and powers inventions that were once considered science fiction, from artificial intelligence to autonomous cars.
What We Are Seeking
We are the GPU Communications Libraries and Networking team at NVIDIA. We build communication libraries like NCCL, NVSHMEM, and UCX that are crucial for scaling Deep Learning and HPC. We're seeking a Senior Software Architect to help co-design next-gen data center platforms and scalable communications software.
DL and HPC applications have huge compute demands and already run at scales of up to tens of thousands of GPUs. GPUs are connected with high-speed interconnects (e.g. NVLink, PCIe) within a node and with high-speed networking (e.g. InfiniBand, Ethernet) across nodes. Efficient and fast communication between GPUs directly impacts end-to-end application performance. This impact continues to grow with the increasing scale of next-generation systems. This is an outstanding opportunity to advance the state of the art, break performance barriers, and deliver platforms the world has never seen before. Are you ready to build the new and innovative technologies that will help realize NVIDIA's vision?
What You Will Be Doing
• Investigate opportunities to improve communication performance by identifying bottlenecks in today's systems.
• Design and implement new communication technologies to accelerate AI and HPC workloads.
• Explore innovative solutions in HW and SW for our next-generation platforms as part of co-design efforts involving GPU, Networking, and SW architects.
• Build proofs of concept, conduct experiments, and perform quantitative modeling to evaluate and drive new innovations.
• Use simulation to explore performance of large GPU clusters (think hundreds of thousands of GPUs).
What We Need To See
• MS/PhD degree in CS/CE or equivalent experience.
• 5+ years of relevant experience.
• Excellent C/C++ programming and debugging skills.
• Experience with parallel programming models (MPI, SHMEM) and at least one communication runtime (MPI, NCCL, NVSHMEM, OpenSHMEM, UCX, UCC).
• Deep understanding of operating systems, computer and system architecture.
• Solid grounding in the fundamentals of network architecture, topology, algorithms, and communication scaling relevant to AI and HPC workloads.
• Strong experience with Linux.
• Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporate environment.
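For readers less familiar with the communication runtimes listed above, all of them provide collectives such as allreduce. A minimal pure-Python simulation of the ring allreduce schedule (reduce-scatter followed by allgather) shows the pattern these libraries implement; it is a didactic sketch, not NCCL's or MPI's actual code, and needs no runtime installed:

```python
def ring_allreduce(bufs):
    """Simulate ring allreduce over p equal-length buffers (lists of numbers).

    Reduce-scatter: in step s, rank r sends its partial chunk (r - s) % p to
    rank r+1, which accumulates it; after p-1 steps, rank r holds the fully
    reduced chunk (r + 1) % p. Allgather: the completed chunks circulate once
    more so every rank ends with the full element-wise sum.
    """
    p = len(bufs)
    n = len(bufs[0])
    assert n % p == 0, "sketch assumes buffer length divisible by rank count"
    c = n // p
    bufs = [list(b) for b in bufs]  # copy; don't mutate the caller's data
    for s in range(p - 1):          # reduce-scatter phase (accumulate)
        for r in range(p):
            k, dst = (r - s) % p, (r + 1) % p
            for i in range(k * c, (k + 1) * c):
                bufs[dst][i] += bufs[r][i]
    for s in range(p - 1):          # allgather phase (overwrite)
        for r in range(p):
            k, dst = (r + 1 - s) % p, (r + 1) % p
            for i in range(k * c, (k + 1) * c):
                bufs[dst][i] = bufs[r][i]
    return bufs
```

Each rank only ever talks to its ring neighbor, which is what makes the schedule map so well onto NVLink rings and network topologies.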
Ways To Stand Out from the Crowd
• Expertise in related technology and passion for what you do.
• Experience with CUDA programming and NVIDIA GPUs.
• Knowledge of high-performance networks like InfiniBand, RoCE, NVLink, etc.
• Experience with Deep Learning frameworks such as PyTorch, TensorFlow, etc.
• Knowledge of deep learning parallelisms and mapping to the communication subsystem.
• Experience with HPC applications.
• Strong collaborative and interpersonal skills and a proven track record of effectively guiding and influencing within a dynamic and multi-functional environment.
The base salary range is 180,000 USD – 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · In-person / Remote · Full Time
Distinguished Software Architect — Deep Learning and HPC Communications
·
NVIDIA
·
Santa Clara, CA
Hide Details
Session: Job Postings
Description: NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High Performance Computing and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions, from artificial intelligence to autonomous cars.
We are the GPU Communications Libraries and Networking team at NVIDIA. We deliver communication libraries such as NCCL, NVSHMEM, and UCX for Deep Learning and HPC. We are looking for a Distinguished Software Architect to help co-design our next-generation data center platforms. DL and HPC applications already have huge compute demands and run at scales of up to tens of thousands of GPUs. GPUs are connected with high-speed interconnects (e.g. NVLink, PCIe) within a node and with high-speed networking (e.g. InfiniBand, Ethernet) across nodes. Communication performance between GPUs has a direct impact on end-to-end application performance, and the stakes are even higher at huge scales. This is an outstanding opportunity to push the limits of the state of the art and deliver platforms the world has never seen before. Are you ready to contribute to the development of innovative technologies and help realize NVIDIA's vision?
What You Will Be Doing
• Research new communication technologies (e.g. expand the GPUDirect technology portfolio) and design new features for our communication libraries.
• Propose innovative solutions in HW and SW for our next-gen platforms. You will co-design these solutions with the GPU, Networking, and SW architects and ensure seamless integration with the software stacks.
• Inspire changes based on quantitative data coming from proof-of-concepts or detailed technical analysis/modeling.
• Drive the adoption of new communication technologies across application verticals.
• Keep up with the latest DL research and collaborate with diverse teams (internal and external), including DL researchers, and customers.
What We Need To See
• PhD in Computer Science, Computer Engineering or related field or strong equivalent experience
• 15+ years of relevant experience in academia or the industry
• Expertise in the following areas: HPC, parallel programming models (MPI, SHMEM), at least one communication runtime (MPI, NCCL, NVSHMEM, OpenSHMEM, UCX, UCC), computer and system architecture, GPU architecture and CUDA
• Deep understanding of high performance networking from prior work experience: network technologies (InfiniBand, Ethernet), network design, network topologies, network debugging and performance analysis
• Strength in at least a few of these areas: ML/DL fundamentals and how they tie to communications, parallel algorithms, fault tolerance and resiliency, competitive assessments, performance analysis and optimizations for parallel applications on large clusters, developing applications using DL Frameworks (PyTorch, TensorFlow)
• Programming fluency with C or C++ for systems software development
• Flexibility to work and communicate effectively across different HW/SW teams and time zones
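As a worked illustration of why scale changes collective-algorithm choice (one reason tree algorithms exist in NCCL alongside rings): counting only latency (alpha) terms, a ring allreduce needs 2(p-1) sequential steps while a tree-based one needs roughly 2*log2(p). This is a deliberately simplified model that ignores bandwidth terms and pipelining:

```python
import math

def ring_alpha_steps(p):
    """Sequential latency terms in a ring allreduce: p-1 steps to
    reduce-scatter plus p-1 steps to allgather."""
    return 2 * (p - 1)

def tree_alpha_steps(p):
    """Sequential latency terms in a binary-tree allreduce: about log2(p)
    hops to reduce up the tree, and the same to broadcast back down."""
    return 2 * math.ceil(math.log2(p))

# At p = 1024 ranks the gap is dramatic: 2046 ring steps vs 20 tree steps,
# which is why latency-bound (small-message) collectives favor trees while
# bandwidth-bound (large-message) ones favor rings.
```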
Ways To Stand Out from the Crowd
• Industry-recognized leadership in HPC/DL communications with history of patents, publications and conference talks and keynotes in areas relevant to this role
• Influential role in industry standards (e.g. MPI, OpenSHMEM) and open source software (e.g. PyTorch, UCX, Open MPI)
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative and autonomous, we want to hear from you!
The base salary range is 308,000 USD – 471,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · In-person · Full Time
Senior System Software Engineer, NCCL — Partner Enablement
·
NVIDIA
·
Santa Clara, CA
Hide Details
Session: Job Postings
Description: NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High Performance Computing and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions, from artificial intelligence to autonomous cars.
We are the GPU Communications Libraries and Networking team at NVIDIA. We deliver communication runtimes like NCCL and NVSHMEM for Deep Learning and HPC applications. We are looking for a motivated Partner Enablement Engineer to guide our key partners and customers with NCCL. Most DL/HPC applications run on large clusters with high-speed networking (InfiniBand, RoCE, Ethernet). This is an outstanding opportunity to gain an end-to-end understanding of the AI networking stack. Are you ready to contribute to the development of innovative technologies and help realize NVIDIA's vision?
What You Will Be Doing
• Engage with our partners and customers to root cause functional and performance issues reported with NCCL
• Conduct performance characterization and analysis of NCCL and DL applications on groundbreaking GPU clusters
• Develop tools and automation to isolate issues on new systems and platforms, including cloud platforms (Azure, AWS, GCP, etc.)
• Guide our customers and support teams on HPC knowledge and standard methodologies for running applications on multi-node clusters
• Write documentation and conduct training sessions/webinars for NCCL
• Engage with internal teams in different time zones on networking, GPUs, storage, infrastructure and support
What We Need To See
• BS/MS degree in CS/CE or equivalent experience with 5+ years of relevant experience
• Experience with parallel programming and at least one communication runtime (MPI, NCCL, UCX, NVSHMEM)
• Excellent C/C++ programming skills, including debugging, profiling, code optimization, performance analysis, and test design
• Experience working with engineering or academic research community supporting HPC or AI
• Practical experience with high performance networking: InfiniBand/RoCE/Ethernet networks, RDMA, topologies, congestion control
• Expertise in Linux fundamentals and a scripting language, preferably Python
• Familiarity with containers, cloud provisioning and scheduling tools (Docker, Docker Swarm, Kubernetes, SLURM, Ansible)
• Adaptability and passion to learn new areas and tools
• Flexibility to work and communicate effectively across different teams and time zones
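One concrete piece of the performance-characterization work above: the nccl-tests benchmarks report "bus bandwidth" rather than raw algorithm bandwidth, scaling by the amount of data each rank must actually move so that numbers are comparable across rank counts. For allreduce that factor is 2(n-1)/n. A small helper reproducing the published formula:

```python
def allreduce_busbw_gbs(size_bytes, time_s, nranks):
    """Bus bandwidth (GB/s) as nccl-tests reports it for allreduce:
    algorithm bandwidth (size/time) scaled by 2*(n-1)/n, the per-rank
    data-movement factor of a bandwidth-optimal allreduce."""
    algbw_gbs = size_bytes / time_s / 1e9
    return algbw_gbs * (2 * (nranks - 1) / nranks)

# 4 GB reduced across 8 ranks in 20 ms: algbw = 200 GB/s,
# busbw = 200 * 2*7/8 = 350 GB/s.
print(allreduce_busbw_gbs(4e9, 0.02, 8))
```

Comparing busbw against the known link bandwidth of the fabric is the usual first step when triaging a "NCCL is slow" report.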
Ways To Stand Out from the Crowd
• Experience conducting performance benchmarking and developing infrastructure on HPC clusters
• Prior system administration experience, especially for large clusters
• Experience debugging network configuration issues in large-scale deployments
• Familiarity with CUDA programming and/or GPUs
• Good understanding of Machine Learning concepts and experience with Deep Learning frameworks such as PyTorch, TensorFlow
• Deep understanding of technology and passion for what you do
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · In-person / Remote · Full Time
Senior Deep Learning Software Engineer, PyTorch
·
NVIDIA
·
Santa Clara, CA
Hide Details
Session: Job Postings
Description: We are now looking for a Senior Deep Learning Software Engineer, PyTorch. NVIDIA is hiring software engineers to design and build tools used by AI engineers across the world to design, develop, and deploy AI applications that scale across thousands of GPUs. This position will embed you in an ambitious and diverse team that influences all areas of NVIDIA's AI platform and directly contributes to PyTorch, a premier deep learning framework. In this role you will work with multiple teams at NVIDIA across fields, as well as collaborate internationally with the PyTorch community to develop the best AI platform in the world.
What You Will Be Doing
• Design and build PyTorch components and tools that run efficiently on supercomputers with thousands of GPUs.
• Collaborate with NVIDIA’s hardware and software teams to improve the network and GPU efficiency in PyTorch.
• Design, build and support production AI solutions used by enterprise customers and partners.
• Work with internal applied researchers to improve their AI tools.
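Much of the network/GPU efficiency work above revolves around overlapping gradient allreduce with backpropagation. PyTorch's DistributedDataParallel does this by grouping gradients into buckets (the `bucket_cap_mb` knob, 25 MB by default) and launching an allreduce as each bucket fills during the backward pass. The following is a simplified sketch of that grouping logic, not the actual DDP implementation:

```python
def ddp_buckets(param_numels, bucket_cap_mb=25, bytes_per_elem=4):
    """Group parameter gradients into buckets of at most bucket_cap_mb.

    Parameters are taken in reverse order, mirroring how DDP fills
    gradients during backward (last layers produce gradients first).
    A parameter larger than the cap still gets its own bucket.
    """
    cap = bucket_cap_mb * 1024 * 1024
    buckets, cur, cur_bytes = [], [], 0
    for numel in reversed(param_numels):
        b = numel * bytes_per_elem  # assumes fp32 gradients by default
        if cur and cur_bytes + b > cap:
            buckets.append(cur)
            cur, cur_bytes = [], 0
        cur.append(numel)
        cur_bytes += b
    if cur:
        buckets.append(cur)
    return buckets
```

Fewer, larger buckets amortize per-collective latency; smaller buckets start communication earlier. Tuning that trade-off is exactly the kind of network-efficiency work this role describes.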
What We Need To See
• MS in Computer Science or Engineering (or equivalent experience) with 5+ years of professional experience in High Performance Computing
• Proficiency with Python and C++ programming
• Proven experience with Thread and Distributed Parallel Programming
• Demonstrated experience developing large software projects
• Strong verbal and written communication skills
Ways To Stand Out from the Crowd
• Familiarity with Machine Learning
• Experience with CUDA Programming and RDMA networking
• Experience with Python
• Participation in the open-source community
• Demonstrated experience working with multi-disciplinary teams
With competitive salaries and a generous benefits package (www.nvidiabenefits.com), we are widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our best-in-class engineering teams are rapidly growing. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you!
The base salary range is 180,000 USD – 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · In-person · Full Time
Senior Solutions Architect, NIC and DPU — NVIS
·
NVIDIA
·
Santa Clara, CA
Hide Details
Session: Job Postings
Description: NVIDIA is the world leader in computer graphics, artificial intelligence, and accelerated computing. For over 25 years, we have been at the forefront of research and engineering around the greatest advances in technology. Our history of innovation drives us to solve the world's hardest problems.
NVIDIA is looking for a Senior NIC/DPU Solutions Architect to join its NVIDIA Infrastructure Specialist Team. Academic and commercial groups around the world are using NVIDIA products to revolutionize deep learning and data analytics, and to power data centers. Join the team building many of the largest and fastest AI/HPC systems in the world! We are looking for someone who thrives on a dynamic, customer-focused team, which requires excellent interpersonal skills. This role interacts with customers, partners, and internal teams to analyze, define, and implement large-scale networking projects. The scope of these efforts spans networking, system design, and automation, and includes being the face to the customer.
What You'll Be Doing
• Support GPU, NIC, and networking applications on the converged GPU/DPU/NIC and x86 platforms.
• Work on customer production activities, introducing and integrating NVIDIA networking products to new and existing customers.
• Gain customers’ trust and understand their needs.
• Work closely with and support cross-functional teams, optimize customer environment, and maintain resiliency.
• Help with customer production requirements alongside engineering and product teams.
• Address complex and non-obvious customer issues.
What We Need To See
• BS/MS/PhD or equivalent experience in Computer Science, Data Science, Electrical/Computer Engineering, Physics, Mathematics, or other Engineering fields with at least 8 years of work or research experience in networking fundamentals, TCP/IP stack, and data center architecture.
• 8+ years of experience configuring, testing, validating, and resolving issues in LAN and InfiniBand networking, including the use of validation tools for InfiniBand health and performance, in medium- to large-scale HPC/AI network environments.
• Knowledge and experience with Linux system administration/DevOps: process management, package management, task scheduling, kernel management, boot procedures, troubleshooting, performance reporting/optimization/logging, and routing/advanced networking (tuning and monitoring).
• Driven focus on customer needs and satisfaction.
• Self-motivated with excellent leadership skills, including working with customers.
• Strong written, verbal, and listening skills in English are critical.
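For the medium- to large-scale fabrics mentioned above, a quick back-of-the-envelope sizing helper for a non-blocking two-tier (leaf-spine) fat-tree topology, assuming identical switches of a given port count (a textbook simplification; real deployments often oversubscribe or add a third tier):

```python
def two_tier_max_hosts(radix):
    """Max hosts in a non-blocking two-tier fat tree built from switches
    with `radix` ports: each leaf uses half its ports for hosts and half
    for uplinks, and each spine port serves one leaf, so at most `radix`
    leaves of radix/2 hosts each."""
    return radix * (radix // 2)

def spines_needed(radix):
    """Spine switches required when every leaf is fully populated:
    one spine per leaf uplink."""
    return radix // 2

# Classic 36-port InfiniBand switches top out at 648 hosts in two tiers,
# which is why larger clusters move to three-tier fat trees or DragonFly.
print(two_tier_max_hosts(36), spines_needed(36))
```

Knowing where a design crosses this boundary is a routine part of scoping the "large-scale networking projects" this role describes.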
Ways To Stand Out from the Crowd
• Familiarity with the InfiniBand protocol and RDMA concepts.
• Experience with GPUs, CUDA, GPUDirect or NVIDIA's BlueField Data Processing Unit (DPU).
• Experience with high-performance computing architectures.
• Understanding of how job schedulers (Slurm, PBS) work.
• Coding development experience with multiple programming languages (from low-level C programming language to high-level languages such as Python/Bash).
• Cluster management technologies knowledge and bonus credit for BCM (Base Command Manager).
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking individuals in the world working for us. If you're creative and autonomous, we want to hear from you.
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
NVIDIA is looking for a Senior NIC/DPU Solutions Architect to join its NVIDIA Infrastructure Specialist Team. Academic and commercial groups around the world are using NVIDIA products to revolutionize deep learning and data analytics, and to power data centers. Join the team building many of the largest and fastest AI/HPC systems in the world! We are looking for someone with the ability to work on a dynamic customer-focused team that requires excellent interpersonal skills. This role will be interacting with customers, partners and internal teams, to analyze, define and implement large-scale Networking projects. The scope of these efforts includes a combination of Networking, System Design and Automation and being the face to the customer!
What You'll Be Doing
• Support GPU, NIC, and networking applications on the converged GPU/DPU/NIC and x86 platforms.
• Work on customer production activities, introducing and integrating NVIDIA networking products to new and existing customers.
• Gain customers’ trust and understand their needs.
• Work closely with and support cross-functional teams, optimize customer environment, and maintain resiliency.
• Help with customer production requirements alongside engineering and product teams.
• Address both routine and sophisticated customer issues.
What We Need To See
• BS/MS/PhD or equivalent experience in Computer Science, Data Science, Electrical/Computer Engineering, Physics, Mathematics, or other Engineering fields with at least 8 years of work or research experience in networking fundamentals, TCP/IP stack, and data center architecture.
• 8+ years of experience configuring, testing, validating, and troubleshooting LAN and InfiniBand networks, including the use of InfiniBand health and performance validation tools in medium- to large-scale HPC/AI network environments.
• Knowledge and experience with Linux system administration/DevOps: process management, package management, task scheduling, kernel management, boot procedures, troubleshooting, performance reporting/optimization/logging, and network routing/advanced networking (tuning and monitoring).
• Driven focus on customer needs and satisfaction.
• Self-motivated with excellent leadership skills, including working with customers.
• Strong written, verbal, and listening skills in English are critical.
Ways To Stand Out from the Crowd
• Familiarity with the InfiniBand protocol and RDMA concepts.
• Experience with GPUs, CUDA, GPUDirect or NVIDIA's BlueField Data Processing Unit (DPU).
• Experience with high-performance computing architectures.
• Understanding of how job schedulers (Slurm, PBS) work.
• Software development experience in multiple programming languages, from low-level C to high-level languages such as Python and Bash.
• Knowledge of cluster management technologies, with bonus credit for BCM (Base Command Manager).
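As a trivial illustration of the RDMA familiarity asked for above: on Linux, RDMA-capable adapters appear under `/sys/class/infiniband`, and a probe of that tree is a common first sanity check. The sketch below assumes the standard sysfs layout; the helper name is ours.

```python
import os

def list_rdma_devices(sysfs_root="/sys/class/infiniband"):
    """Return RDMA-capable devices (e.g. mlx5_0) visible under sysfs.

    An empty list means either no HCA is present or the drivers are
    not loaded; for a quick health check both cases are treated alike.
    """
    if not os.path.isdir(sysfs_root):
        return []
    return sorted(os.listdir(sysfs_root))
```

On a host with a ConnectX adapter this typically returns names like `['mlx5_0']`; taking the root as a parameter also makes the helper testable without real hardware.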
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking individuals in the world working for us. If you're creative and autonomous, we want to hear from you.
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27 · Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
NVIDIA · Remote · Full Time
Senior Solutions Architect, Cloud Infrastructure and DevOps — NVIS
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA is the world leader in computer graphics, artificial intelligence, and accelerated computing. For over 25 years, we have been at the forefront of research and engineering around the greatest advances in technology. Our history of innovation drives us to solve the world's hardest problems.
NVIDIA is looking for a Senior Cloud Infrastructure/DevOps Solutions Architect to join its NVIDIA Infrastructure Specialist Team. Academic and commercial groups around the world are using NVIDIA products to revolutionize deep learning and data analytics, and to power data centers. Join the team building many of the largest and fastest AI/HPC systems in the world! We are looking for someone with the ability to work on a dynamic customer-focused team that requires excellent interpersonal skills. This role will be interacting with customers, partners and internal teams, to analyze, define and implement large-scale Networking projects. The scope of these efforts includes a combination of Networking, System Design and Automation and being the face to the customer!
What You'll Be Doing
• Design, implement and maintain large-scale HPC/AI clusters with monitoring, logging and alerting
• Manage Linux job/workload schedulers and orchestration tools
• Develop and maintain continuous integration and delivery pipelines
• Develop tooling to automate deployment and management of large-scale infrastructure environments, to automate operational monitoring and alerting, and to enable self-service consumption of resources
• Deploy monitoring solutions for the servers, network and storage
• Perform bottom-up troubleshooting across bare metal, the operating system, the software stack, and the application level
• Serve as a technical resource; develop, refine, and document standard methodologies to share with internal teams
• Support Research & Development activities and engage in POCs/POVs for future improvements
What We Need To See
• BS/MS/PhD or equivalent experience in Computer Science, Data Science, Electrical/Computer Engineering, Physics, Mathematics, or other Engineering fields with at least 8 years of work or research experience in networking fundamentals, TCP/IP stack, and data center architecture.
• Knowledge of HPC and AI solution technologies from CPUs and GPUs to high speed interconnects and supporting software.
• Direct design, implementation and management experience with cloud computing platforms (e.g. AWS, Azure, Google Cloud).
• Experience with job scheduling workloads and orchestration technologies such as Slurm, Kubernetes and Singularity.
• Excellent knowledge of Windows and Linux (RedHat/CentOS and Ubuntu) networking (sockets, firewalld, iptables, Wireshark, etc.) and internals, ACLs, OS-level security protections, and common protocols (e.g. TCP, DHCP, DNS).
• Experience with multiple storage solutions such as Lustre, GPFS, ZFS, and XFS.
• Familiarity with newer and emerging storage technologies.
• Python programming and bash scripting experience.
• Comfortable with automation and configuration management tools including Jenkins, Ansible, Puppet/Chef, etc.
• Deep knowledge of networking protocols such as InfiniBand and Ethernet.
• Deep understanding and experience with virtual systems (e.g. VMware, Hyper-V, KVM, or Citrix).
• Strong written, verbal, and listening skills in English are critical.
Ways To Stand Out from the Crowd
• Knowledge of CPU and/or GPU architecture
• Knowledge of Kubernetes, container-related microservice technologies
• Experience with GPU-focused hardware/software (DGX, CUDA)
• Background with RDMA (InfiniBand or RoCE) fabrics
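The monitoring-and-alerting tooling described in this role often starts with something as small as a Slurm node-state check. The sketch below assumes the plain-text format produced by `sinfo -h -N -o "%N %t"` (one "nodename state" pair per line); it is a sketch under those assumptions, not production tooling.

```python
from collections import Counter

def node_state_counts(sinfo_text):
    """Tally Slurm node states from `sinfo -h -N -o "%N %t"` output.

    Each line looks like "node001 idle"; a state such as "down*"
    carries a trailing '*' when the node is not responding.
    """
    counts = Counter()
    for line in sinfo_text.splitlines():
        parts = line.split()
        if len(parts) == 2:
            counts[parts[1]] += 1
    return counts

def should_alert(counts, max_bad_fraction=0.05):
    """Return True when the fraction of down/drained/failed nodes
    exceeds the threshold."""
    total = sum(counts.values())
    if total == 0:
        return False
    bad = sum(n for state, n in counts.items()
              if state.rstrip("*").startswith(("down", "drain", "fail")))
    return bad / total > max_bad_fraction
```

In practice the input would come from running `sinfo` via `subprocess` on a management node, and a positive `should_alert` result would feed whatever paging or dashboard system the site runs.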
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking individuals in the world working for us. If you're creative and autonomous, we want to hear from you.
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Senior Solutions Architect, InfiniBand and Ethernet Networking — NVIS
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA is the world leader in computer graphics, artificial intelligence, and accelerated computing. For over 25 years, we have been at the forefront of research and engineering around the greatest advances in technology. Our history of innovation drives us to solve the world's hardest problems.
NVIDIA is looking for a Senior Networking (ETH/IB) Solutions Architect to join its NVIDIA Infrastructure Specialist Team. Academic and commercial groups around the world are using NVIDIA products to revolutionize deep learning and data analytics, and to power data centers. Join the team building many of the largest and fastest AI/HPC systems in the world! We are looking for someone with the ability to work on a dynamic customer-focused team that requires excellent interpersonal skills. This role will be interacting with customers, partners and internal teams, to analyze, define and implement large-scale Networking projects. The scope of these efforts includes a combination of Networking, System Design and Automation and being the face to the customer!
What You'll Be Doing
• Primary responsibilities will include building AI/HPC infrastructure for new and existing customers.
• Support operational and reliability aspects of large-scale AI clusters, focusing on performance at scale, real-time monitoring, logging, and alerting.
• Engage in and improve the whole lifecycle of services — from inception and design through deployment, operation, and refinement.
• Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.
• Provide feedback to internal teams such as opening bugs, documenting workarounds, and suggesting improvements.
What We Need To See
• BS/MS/PhD or equivalent experience in Computer Science, Data Science, Electrical/Computer Engineering, Physics, Mathematics, or other Engineering fields.
• At least 8 years of work or research experience in networking fundamentals, TCP/IP stack, and data center architecture.
• 8+ years of experience configuring, testing, validating, and troubleshooting LAN and InfiniBand networks, including the use of InfiniBand health and performance validation tools in medium- to large-scale HPC/AI network environments.
• Knowledge and experience with Linux system administration/DevOps: process management, package management, task scheduling, kernel management, boot procedures, troubleshooting, performance reporting/optimization/logging, and network routing/advanced networking (tuning and monitoring).
• Driven focus on customer needs and satisfaction.
• Self-motivated with excellent leadership skills, including working with customers.
• Extensive knowledge of automation, delivering fully automated network provisioning solutions using Ansible, Salt, and Python.
• Strong written, verbal, and listening skills in English are essential.
Ways To Stand Out from the Crowd
• Linux or Networking certifications
• Experience with high-performance computing architectures
• Understanding of how job schedulers (Slurm, PBS) work
• Proven knowledge of Python or Bash
• Knowledge of cluster management technologies, with bonus credit for BCM (Base Command Manager)
• Experience with GPU (Graphics Processing Unit)-focused hardware/software, as well as experience with MPI (Message Passing Interface)
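The InfiniBand validation work this role describes is typically scripted against tools like `ibstat` from the infiniband-diags package. Below is a hedged sketch that flags ports not in the Active state; the parsing targets ibstat's usual plain-text layout, and the sample in use is illustrative rather than captured output.

```python
import re

def inactive_ports(ibstat_text):
    """Return (CA, port, state) tuples for ports whose state is not Active.

    Parses the plain-text output of `ibstat` (infiniband-diags), which
    groups "Port N:" blocks with "State: ..." lines under each "CA '...'".
    """
    results = []
    ca, port = None, None
    for line in ibstat_text.splitlines():
        line = line.strip()
        m = re.match(r"CA '(\S+)'", line)
        if m:
            ca = m.group(1)
        m = re.match(r"Port (\d+):", line)
        if m:
            port = int(m.group(1))
        m = re.match(r"State:\s*(\S+)", line)
        if m and m.group(1) != "Active":
            results.append((ca, port, m.group(1)))
    return results
```

A cron job wrapping this around `subprocess.run(["ibstat"], capture_output=True)` is one minimal way to surface link-state regressions before users notice them.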
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking individuals in the world working for us. If you're creative and autonomous, we want to hear from you.
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Senior Solutions Architect, Industry Customer Success and Partnership — NVIS
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA is the world leader in computer graphics, artificial intelligence, and accelerated computing. For over 25 years, we have been at the forefront of research and engineering around the greatest advances in technology. Our history of innovation drives us to solve the world's hardest problems.
NVIDIA is looking for a Senior Industry SA/Customer Success/Partnership Solutions Architect to join its NVIDIA Infrastructure Specialist Team. Academic and commercial groups around the world are using NVIDIA products to revolutionize deep learning and data analytics, and to power data centers. Join the team building many of the largest and fastest AI/HPC systems in the world! We are looking for someone with the ability to work on a dynamic customer-focused team that requires excellent interpersonal skills. This role will be interacting with customers, partners and internal teams, to analyze, define and implement large-scale Networking projects. The scope of these efforts includes a combination of Networking, System Design and Automation and being the face to the customer!
What You'll Be Doing
• Engage with NVIDIA Cloud Partners (NCP) to drive initiatives, shape new business opportunities, and cultivate collaborations in the field of Artificial Intelligence (AI), contributing to the advancement of our cloud solutions.
• Identify and pursue new business opportunities for NVIDIA products and technology solutions in data centers and artificial intelligence applications, closely collaborating with Engineering, Product Management, and Sales teams.
• Serve as a technical specialist for GPU and networking products, collaborating closely with sales account managers to secure design wins and actively engaging with customer engineers, management, and architects at key accounts.
• Conduct regular technical customer meetings to discuss project and product roadmaps, features, and introduce new technology solutions.
• Develop custom product demonstrations and Proof of Concepts (POCs) addressing critical business needs, supporting sales efforts.
• Demonstrate strong technical presentation skills in English, confidence in developing Proofs-of-Concept, and a customer-focused mentality, coupled with good organization skills, a logical approach to problem-solving and effective time management for handling concurrent requests.
• Manage technical project aspects of complex data center deployments, including design-in opportunities and responding to RFP/RFI proposals.
What We Need To See
• BS/MS/PhD or equivalent experience in Computer Science, Data Science, Electrical/Computer Engineering, Physics, Mathematics, or other Engineering fields with at least 8 years of work or research experience in networking fundamentals, TCP/IP stack, and data center architecture.
• The ideal candidate has 8+ years of Solution Architect or similar Sales Engineering experience, with the motivation and skills to drive the technical pre-sales process.
• Deep expertise in data center engineering, GPUs, and networking, including a solid understanding of network topologies and server and storage architecture.
• Proficiency in system-level aspects, encompassing Operating Systems, Linux kernel drivers, GPUs, NICs, and hardware architecture.
• Demonstrated expertise in cloud orchestration software and job schedulers, including platforms like Kubernetes, Docker Swarm, and HPC-specific schedulers such as Slurm.
• Familiarity with cloud-native technologies and their integration with traditional infrastructure is essential.
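Since the orchestration requirements above name Kubernetes explicitly, here is a hedged sketch of the kind of manifest a GPU workload needs: `nvidia.com/gpu` is the extended resource exposed by the NVIDIA device plugin, while the function, job name, and image tag below are our own illustrative choices.

```python
import json

def gpu_job_manifest(name, image, command, gpus=1):
    """Build a minimal Kubernetes Job manifest that requests NVIDIA GPUs.

    The `nvidia.com/gpu` limit is the standard way to schedule onto
    GPU nodes once the NVIDIA device plugin is deployed; the image
    and command are caller-supplied.
    """
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": name,
                        "image": image,
                        "command": command,
                        "resources": {
                            "limits": {"nvidia.com/gpu": gpus},
                        },
                    }],
                }
            }
        },
    }

if __name__ == "__main__":
    # Image tag is illustrative; json output can be piped to
    # `kubectl apply -f -` since Kubernetes accepts JSON manifests.
    m = gpu_job_manifest("gpu-smoke-test", "nvidia/cuda", ["nvidia-smi"])
    print(json.dumps(m, indent=2))
```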
Ways To Stand Out from the Crowd
• Knowledge in InfiniBand and Artificial Intelligence infrastructure.
• Demonstrated hands-on experience with NVIDIA systems/SDKs (e.g., CUDA), NVIDIA Networking technologies (e.g., DPU, RoCE, InfiniBand), ARM CPU solutions, coupled with proficiency in C/C++ programming, parallel programming, and GPU development.
• Knowledge of DevOps/MLOps technologies such as Docker/containers, Kubernetes, data center compute/network/storage deployments.
• Large-scale systems management experience.
• Experience with Python programming and AI workflow development and deployment (training/inference) would be advantageous.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking individuals in the world working for us. If you're creative and autonomous, we want to hear from you.
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Senior Solution Architect, HPC and AI — NVIS · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA is the world leader in computer graphics, artificial intelligence, and accelerated computing. For over 25 years, we have been at the forefront of research and engineering around the greatest advances in technology. Our history of innovation drives us to solve the world's hardest problems.
NVIDIA is looking for a Senior HPC/AI Solutions Architect to join its NVIDIA Infrastructure Specialists team. Academic and commercial groups around the world are using NVIDIA products to revolutionize deep learning and data analytics, and to power data centers. Join the team building many of the largest and fastest AI/HPC systems in the world! We are looking for someone who thrives on a dynamic, customer-focused team; excellent interpersonal skills are essential. You will interact with customers, partners, and internal teams to analyze, define, and implement large-scale AI/HPC projects. The scope of these efforts spans networking, system design, and automation, and you will be the face of NVIDIA to the customer.
What You’ll Be Doing
• Primary responsibilities will include building robust AI/HPC infrastructure for new and existing customers.
• Support operational and reliability aspects of large-scale AI clusters, focusing on performance at scale, training stability, real-time monitoring, logging, and alerting.
• Engage in and improve the whole lifecycle of services from inception and design through deployment, operation, and refinement.
• Focus on understanding the AI workload and how it interacts with other parts of the system, such as networking, storage, deep learning frameworks, and data-cleaning tools.
• Help maintain services once they are live by measuring and monitoring progress of AI jobs and helping engineering design solutions for more robust training at scale.
• Provide feedback to internal teams such as opening bugs, documenting workarounds, and suggesting improvements.
What We Need To See
• BS/MS/PhD or equivalent experience in Computer Science, Data Science, Electrical/Computer Engineering, Physics, Mathematics, or other Engineering fields, with at least 8 years of work or research experience with Python/C++/other software development.
• Track record of medium- to large-scale AI training and understanding of key libraries used for NLP/LLM/VLA training (NeMo Framework, DeepSpeed, etc.).
• Experience with integration and deployment of software products in production enterprise environments, and microservices software architecture.
• You are excited to work with multiple levels and teams across organizations (Engineering, Product, Sales and Marketing teams).
• Capable of working in a constantly evolving environment without losing focus.
• Ability to multitask in a fast-paced environment.
• Driven with strong analytical and problem-solving skills.
• Strong time-management and organization skills for coordinating multiple initiatives, priorities and implementations of new technology and products into very sophisticated projects.
• You are a self-starter with a growth mindset and a passion for continuous learning and for sharing findings across the team.
• Technical leadership and strong understanding of NVIDIA technologies, and success in working with customers.
• Excellent verbal, written communication, and technical presentation skills in English.
Ways To Stand Out from the Crowd
• Experience working with large transformer-based architectures for NLP, CV, ASR or others.
• Experience running large-scale distributed DL training.
• Understanding of HPC systems: data center design, high speed interconnect InfiniBand, Cluster Storage and Scheduling related design and/or management experience.
• Proven experience with one or more Tier-1 Clouds (AWS, Azure, GCP or OCI) and cloud-native architectures and software.
• Expertise with parallel filesystems (e.g., Lustre, GPFS, BeeGFS, WekaIO) and high-speed interconnects (InfiniBand, Omni-Path, and Gig-E).
• Strong coding and debugging skills, and demonstrated expertise in one or more of the following areas: Machine Learning, Deep Learning, Slurm, Docker, Kubernetes, Singularity, MPI, MLOps, LLMOps, Ansible, Terraform, and other high-performance AI cluster solutions.
• Technical leadership and strong understanding of NVIDIA technologies including DGX Cloud, NVIDIA AI Enterprise software, Base Command Manager, NeMo, and NVIDIA Inference Microservices.
• Success in working with customers using NVIDIA technologies.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking individuals in the world working for us. If you're creative and autonomous, we want to hear from you.
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · Remote · Full Time
Solutions Architect — InfiniBand and HPC · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA is looking for a Senior HPC Engineer to join its Professional Services team. Academic, commercial and government groups around the world are using NVIDIA products to revolutionize deep learning and data analytics, and to power data centers. Join the team building many of the largest and fastest AI/HPC systems in the world! NVIDIA is looking for someone who thrives on a dynamic, customer-focused team; excellent interpersonal skills are essential. This role will interact with customers, partners and internal teams to analyze, define, and implement large-scale AI/HPC projects. These efforts include a combination of networking, system design, automation, and validation.
What You Will Be Doing
• Primary responsibilities will include deploying, managing, and validating AI/HPC infrastructure in Linux-based environments for new and existing customers.
• Be the domain expert with customers during planning calls through implementation.
• Create and hand over related documentation, and perform the knowledge transfers required to support customers as they roll out some of the most sophisticated systems in the world!
• Provide feedback to internal teams such as opening bugs, documenting workarounds, and suggesting improvements.
What We Need To See
• 5+ years providing in-depth support and deployment services; solving problems for hardware and software products.
• Knowledge and experience with Linux system administration/dev ops, process management, package management, task scheduling, kernel management, boot procedures, troubleshooting, performance reporting/optimization/logging, and network-routing/advanced networking (tuning and monitoring).
• Experience in configuring, testing, validating, and issue resolution of LAN and InfiniBand networking, including use of validation tools for InfiniBand health and performance (ibdiag, etc.) and UFM (Unified Fabric Manager).
• Experience with benchmarking tools such as HPL, NCCL tests, MLPERF.
• Scripting proficiency (Bash, Python, etc.) and automation tooling background (Ansible, Puppet, etc.).
• Familiarity with schedulers such as SLURM, LSF, UGE, etc.
• Kubernetes experience.
• Excellent interpersonal communication skills and the ability to deliver resolutions for customer issues as they arise.
• Strong self-organizational skills and the ability to prioritize/multi-task easily with limited supervision.
• Willingness to travel to customer sites within the United States.
• Minimum of a four-year degree from an accredited university or college in Computer Science, Electrical or Computer Engineering, or equivalent experience.
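As context for the benchmarking requirement above: the NCCL tests report both an algorithm bandwidth (bytes moved divided by time) and a bus bandwidth that normalizes for the collective's communication pattern. A minimal sketch of the all-reduce case, following the scaling factor documented in the nccl-tests repository (the function name here is illustrative):

```python
def allreduce_bus_bw(size_bytes: float, time_s: float, n_ranks: int) -> float:
    """Bus bandwidth (GB/s) as nccl-tests reports it for all-reduce.

    algbw = size / time; busbw scales it by 2*(n-1)/n to reflect the
    data each rank actually moves over the interconnect, making results
    comparable across different rank counts.
    """
    algbw = size_bytes / time_s / 1e9  # algorithm bandwidth in GB/s
    return algbw * 2 * (n_ranks - 1) / n_ranks
```

For example, reducing 8 GB across 8 ranks in 1 second gives an algorithm bandwidth of 8 GB/s and a bus bandwidth of 14 GB/s; with 2 ranks the two figures coincide.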
Ways To Stand Out from the Crowd
• Cluster management technologies knowledge, with bonus credit for BCM (Base Command Manager).
• Experience with GPU (Graphics Processing Unit) focused hardware/software.
• Experience with MPI (Message Passing Interface).
• Storage technologies such as Lustre or GPFS.
• Familiarity with Dell and Supermicro GPU platforms.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!
The base salary range is 116,000 USD – 230,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · Remote · Full Time
Solutions Architect, Spectrum-X — DPU/RoCE-Centric · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA networking designs and manufactures high-performance networking equipment that enables the most powerful supercomputers in the largest data centers in the world. With a distributed collection of NVIDIA GPUs inter-connected by networking solutions such as InfiniBand, Ethernet, or RoCE (RDMA over Converged Ethernet), we make powerful ML/AI platforms possible. We believe in our people and our products. We are seeking motivated, personable, and independent individuals to join our team!
We seek experienced software networking engineers to help support our groundbreaking, innovative technologies that make AI workloads in large clusters even more performant. As a networking Solutions Architect at NVIDIA, you will have agency and a palpable effect on the business, working closely with customers and R&D teams.
What You’ll Be Doing
• Support networking technologies such as Spectrum-X and work with customers on their technical challenges and requirements using said technologies during pre-sales activities
• Develop proof-of-concept materials for innovative technologies for use by early adopters
• Gain customers’ trust and understand their needs to help design and deploy cutting-edge NVIDIA networking platforms to run AI and HPC workloads
• Address sophisticated and highly visible customer issues
• Work closely with R&D teams to develop new features for customers
• Help with product requirements alongside engineering and product marketing
What We Need To See
• 10+ years of experience with computer software; knowledge of the Linux kernel and of Ethernet and IP protocols
• BS, master's, or PhD in Computer Science, Electrical Engineering, or related technical field (or equivalent experience)
• Extensive knowledge in, and experience with, debugging issues involving Ethernet Switches/Routers and network protocols
• Strong analytical and problem-solving skills, with attention to detail
• Ability to work collaboratively and be willing to work directly with customers
Ways To Stand Out from the Crowd
• Coding development experience with multiple programming languages (from low-level C programming language to high-level languages such as Perl, Python, and shell scripts)
• Knowledge in Cloud infrastructure and AI workflows
• Linux Environment and Linux Networking
• Familiarity with NVIDIA DPUs, RoCE, and RDMA concepts
NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. The high-speed networking solutions enable GPUs for large-scale deployments. Our work opens new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions, from artificial intelligence to autonomous vehicles. NVIDIA is looking for excellent people like you to help us accelerate the next wave of artificial intelligence. NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and self-motivated, we want to hear from you!
The base salary range is 220,000 USD – 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · In-person · Remote · Full Time
Solutions Architect, Cloud Providers and Hyperscale · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA is looking for an experienced Solutions Architect to assist customers building infrastructure for AI and HPC. Do you want to be part of a team that brings Artificial Intelligence (AI) hardware and software technologies to production in the field? We are looking for a Solutions Architect to join the NVIDIA team focused on supporting accelerated infrastructure for AI, Machine Learning, and HPC. As part of the NVIDIA Solutions Architecture team, you will be driving end-to-end technology solution integration with some of NVIDIA’s most strategic technology customers as well as offering recommendations to business and engineering teams based on product strategy.
What You’ll Be Doing
• Work with Cloud Providers and Hyperscalers to develop and demonstrate solutions based on NVIDIA’s groundbreaking software and hardware technologies.
• Partner with Sales Account Managers or Developer Relations Managers to identify and secure business opportunities for NVIDIA products and solutions.
• Be the go-to technical resource for customers building cloud infrastructure.
• Conduct regular technical customer meetings for project/product details, feature discussions, intro to new technologies, and debugging sessions.
• Work with customers to build PoCs for solutions to address critical business needs by building out networking and compute infrastructure.
• Help develop overall plans and blueprints for customers developing their own cloud infrastructure.
• Prepare and deliver technical content to customers, including presentations, workshops, etc.
• Analyze and develop solutions for customer networking performance issues.
What We Need To See
• BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Physics, or other Engineering fields or equivalent experience.
• Motivation and skills to help drive technical pre-sales activities.
• 5+ years of Solutions Engineering (or similar Sales Engineering roles) experience.
• Experience building and/or deploying large-scale cluster environments.
• Practical knowledge in building data center scale systems for AI and HPC.
• Effective time management and capacity to balance multiple tasks.
• Ability to communicate ideas clearly through documents, presentations, etc.
Ways To Stand Out from the Crowd
• External customer-facing skill-set and background.
• Hands-on experience with NVIDIA networking hardware, both Ethernet and InfiniBand, NICs/HCAs and switches, and networking software stacks.
• Hands-on experience with GPU systems in general, including but not limited to performance testing, AI benchmarking, etc.
• Ability to think creatively to debug and solve problems.
• Large-scale GPU infrastructure deployments with TCP and/or RDMA, including cloud infrastructure deployments.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!
#LI-Hybrid
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
NVIDIA
In-person
Remote
Full Time
Account Manager, Manufacturing — Robotics and Humanoids · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: Would you like to be a part of one of the most exciting companies in technology?
NVIDIA, the world leader in Visual and Accelerated Computing, is seeking an Enterprise Sales Account Manager with a proven track record in selling complex technology solutions to our customers within the Robotics industry, with a focus on the Humanoid Robotics market segment.
This position requires a close working relationship with NVIDIA's Solution Architects, Business Development leaders and Developer Relationship Managers to drive revenue for the company. If you are passionate about how technology is driving dynamic changes in exciting segments like Artificial Intelligence, Deep Learning, Autonomous Systems and Advanced Software Development, we'd like to talk with you!
What You'll Be Doing
• Positioning and selling of NVIDIA’s 3-computer Solution for Robotics & Humanoid Accounts — Training, Simulation & Edge-Runtime
• Understand and clearly communicate NVIDIA’s value proposition, key features, product messages, and positioning, especially when articulating the value of NVIDIA’s Robotics platforms to clients — https://www.nvidia.com/en-us/industries/robotics/
• Work with NVIDIA teams who have existing relationships with clients to build and nurture the sales pipeline
• Maintain and submit accurate forecasts
• Present NVIDIA solutions, overcome objections, negotiate, and close business over the phone, via Teams, and in person
• Work closely with NVIDIA Channel, Business Development, Developer Relationship Managers, Sales Leadership & Marketing to build your business and facilitate sales opportunities
• Experience working with Startup, Mid-Market, and/or Enterprise Customers
• Work closely with partners, OEMs, CSPs, and ecosystem partners to complete go-to-market plans
• Lead all aspects of the selling cycle to help customers expand revenue
• Develop excellent strategic relationships with customers and ecosystem partners, including C-Level relationships
What We Need To See
• Bachelor’s degree and/or equivalent experience
• 8+ years of sales experience, with a minimum of 2 of those years in a sales role dealing with Artificial Intelligence or Autonomous Systems
• 4+ years of technology quota carrying sales experience responsible for the full sales cycle (prospecting, customer presentation/demos, negotiation, and closing the sale)
• Understanding of NVIDIA’s key platform technologies: AI Foundry/Factory, Omniverse/Digital Twins and Isaac Platform/Autonomous Systems
• Experience in selling and having established relationships within the Manufacturing and Robotics industrial vertical — with both customers and Ecosystem Partners
• Proven success in rolling up a forecast to sales leadership
• Experience with going to market and teaming with Technology Industry Channel Partners
• Excellent interpersonal and presentation communication and closing skills
• Skilled at selling in on-prem or cloud environments, including working with CSPs, ISVs, and OEMs
• Salesforce.com experience is required
• Ability to travel as needed. Job is based out of NVIDIA HQ in Santa Clara, CA.
Ways To Stand Out from the Crowd
• Current or past experience in working with or selling Autonomous Robots, Robotics Simulation Solutions, Humanoids or AMRs (Autonomous Mobile Robots).
• Experience in Robotics frameworks like ROS/RViz, NVIDIA Isaac, Gazebo, and concepts like Functional Safety.
• Understanding of VLA & LBM Models and their role in Autonomous Robotics.
• Knowledge of robotics technology, including sensors (RGB cameras, depth sensors, LiDAR, radar) and hardware such as robotic arms, grippers, end-effectors, and actuators.
• Have polished executive presence with a track record of participating in C-Level strategy sessions and execution of those strategies.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most intelligent and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!
The cash compensation range is 196,000 USD – 299,000 USD, with 85% paid through base salary and 15% variable compensation. Your cash compensation will be determined based on your location, experience and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
NVIDIA
In-person
Full Time
Senior Solutions Architect, Generative AI — Inference · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA is seeking outstanding AI Solutions Architects to assist and support customers that are building solutions with our newest AI technology. At NVIDIA, our solutions architects work across different teams and enjoy helping customers with the latest Accelerated Computing and Deep Learning software and hardware platforms. We're looking to grow our company, and build our teams with the smartest people in the world. Would you like to join us at the forefront of technological advancement? You will become a trusted technical advisor with our customers and work on exciting projects and proof-of-concepts focused on Generative AI and Large Language Models. You will also collaborate with a diverse set of internal teams on performance analysis and modeling of inference software. You should be comfortable working in a dynamic environment, and have experience with Generative AI, Large Language Models, Deep Learning and GPU technologies. This role is an excellent opportunity to work on an interdisciplinary team with the latest technologies at NVIDIA!
What You Will Be Doing
• Partnering with other solution architects, engineering, product and business teams.
• Understanding their strategies and technical needs and helping define high-value solutions.
• Dynamically engaging with developers, scientific researchers, and data scientists, which will give you experience across a range of technical areas.
• Strategically partnering with lighthouse customers and industry-specific solution partners targeting our computing platform.
• Working closely with customers to help them adopt and build solutions using NVIDIA technology.
• Analyzing performance and power efficiency of deep learning inference workloads.
• Some travel to conferences and customers may be required.
What We Need To See
• BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, or other Engineering or related fields (or equivalent experience)
• 5+ years of hands-on experience with Deep Learning frameworks such as PyTorch and TensorFlow
• Strong fundamentals in programming, optimizations and software design, especially in Python
• Strong problem-solving and debugging skills
• Excellent knowledge of theory and practice of Large Language Models and Deep Learning inference
• Excellent presentation, communication and collaboration skills
• Desire to be involved in multiple diverse and creative projects
Ways To Stand Out from the Crowd
• Experience with NVIDIA GPUs and software libraries, such as NVIDIA NeMo Framework, NVIDIA Triton Inference Server, TensorRT, TensorRT-LLM
• Excellent C/C++ programming skills, including debugging, profiling, code optimization, performance analysis, and test design
• Familiarity with parallel programming and distributed computing platforms
• Prior experience with DL training at scale, deploying or optimizing DL inference in production
The base salary range is 148,000 USD – 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
NVIDIA
In-person
Remote
Full Time
Solutions Architect, DGX Cloud · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: Do you want to be part of the team that brings Artificial Intelligence (AI) emerging technology to the field? We are looking for a hardworking Solutions Architect (SA) to join the NVIDIA AI Enterprise (NVAIE) SA Segment Team. The mission of the NVAIE Segment team is to guide and enable the successful adoption at scale of DGX Cloud and NVIDIA AI Enterprise Software in production.
NVIDIA DGX Cloud is an AI platform for enterprise developers, optimized for the demands of generative AI. The DGX Cloud SA team is dedicated to shaping the future of DGX Cloud by actively gathering and incorporating customer feedback and product requirements. Our team will help optimize the onboarding process for DGX Cloud customers, ensuring fast time to insights and exceptional experience. Additionally, we will collaborate with internal teams to scale expertise and knowledge through training and the creation of repeatable guides. Our focus on building demos, qualifications, and assets will streamline the pre-sales process, ultimately increasing sales and adoption of DGX Cloud.
What You’ll Be Doing
• Work closely with DGX Cloud Customers, become their trusted technical advisor, advocate for their needs, and ensure they are successful in accomplishing their business goals with the platform.
• Accelerate customer onboarding and time to insights with DGX Cloud.
• Scale knowledge, reach, and opportunities by building and educating vertical teams and communities on DGX Cloud.
• Provide technical education and facilitate field product feedback to improve DGX Cloud.
• Enable successful first-time integration and deployment of NVAIE Emerging SW products with DGX Cloud.
What We Need To See
• Strong foundational expertise, from a BS, MS, or PhD degree in Engineering, Mathematics, Physics, Computer Science, Data Science, or similar (or equivalent experience).
• 5+ years of proven experience with one or more Tier-1 Clouds (AWS, Azure, GCP or OCI) and cloud-native architectures and software.
• Proven experience in technical leadership, strong understanding of NVIDIA technologies, and success in working with customers.
• Expertise with parallel filesystems (e.g. Lustre, GPFS, BeeGFS, WekaIO) and high-speed interconnects (InfiniBand, Omni-Path, and Gig-E).
• Strong coding and debugging skills, and demonstrated expertise in one or more of the following areas: Machine Learning, Deep Learning, Slurm, Kubernetes, MPI, MLOps, LLMOps, Ansible, Terraform, and other high-performance AI cluster solutions.
• Proficiency in deploying GPU applications in Slurm and Kubernetes.
• Experience with high performance or large-scale computing environments.
Ways To Stand Out from the Crowd
• Hands-on experience with DGX Cloud, NVIDIA AI Enterprise software, Base Command Manager, NeMo, and NVIDIA Inference Microservices.
• Experience with integration and deployment of software products in production enterprise environments, and microservices software architecture.
The base salary range is 148,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
NVIDIA
In-person
Full Time
Account Developer Relations · NVIDIA · Santa Clara, CA
Session: Job Postings
Description: At NVIDIA, we’re solving the world’s most ambitious problems with our groundbreaking developments in Artificial Intelligence, High-Performance Computing and Visualization. We are looking for a Developer Relations Manager to work with our Independent Software Vendors (ISVs) and Service Providers to integrate our portfolio of GPU Accelerated Computing solutions, i.e. Machine Learning and Deep Learning, specifically Generative AI, and to build comprehensive multi-cloud, hybrid and on-prem solutions. We need a passionate, hard-working, and creative technologist who has the skills and drive to work in a fast-evolving technological environment that is always on the cutting edge of AI and High-Performance Computing. This individual is motivated, skilled in communication and collaboration, stays organized, and prioritizes achieving goals. The role will include being a technical business development and accelerated computing platform evangelist to our largest ISVs. This includes integrating new or expanding existing solutions, co-architecting and advising on repeatable solutions, jointly scoping project requirements for Gen AI and LLMs, and understanding integrations with the ecosystem and private/hybrid platforms.
What You'll Be Doing
• Promote NVIDIA tools, libraries, and SDKs with ISV and Service Provider architects and developers.
• Deeply understand Gen AI workflows and LLM breakthroughs, evolving ecosystem and alliances, attend conferences, build a network of influencers, and track opportunities in progress.
• Discover new workflows, identify blockers to ISV adoption, and report back to the product teams.
• Drive early adoption of new products and support launch and go-to-market activities.
• Collaborate with NVIDIA partner managers, solution architects, industry business development managers, sales and marketing.
• We make heavy use of conferencing tools, but some travel is required. You are empowered to find the best way to get your job done and make our partners successful.
What We Need To See
• BS or MS in Engineering, Mathematics, Physics, or Computer Science (or equivalent experience).
• 10+ years of work-related experience in technical leadership roles and architecting GPU accelerated computing solutions.
• Deep understanding of Gen AI and LLM ecosystem of startups, ISVs, data stores, CSP services, and SaaS/platform offers.
• Strong analytical and problem-solving skills, with deep empathy for developers.
• Clear written and oral communication skills with proven track record to articulate value propositions to executive and technical audiences.
• Strong project management skills: ability to plan, prioritize, and drive forward multiple projects simultaneously, while engaging with internal and external stakeholders.
• Extensive knowledge and experience with recent advancements in LLMs and Gen AI.
Ways To Stand Out from the Crowd
• Experience developing with ML/DL frameworks and MLOps ecosystem of partners and solutions in the cloud and on-prem.
• Background with cloud-based solution designing, APIs and Microservices, orchestration platforms, storage solutions and data migration techniques.
• Experience with and empathy for software engineering and application development.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking individuals in the world working for us. If you're creative and autonomous, we want to hear from you!
The base salary range is 180,000 USD – 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · In-person / Remote · Full Time
Senior Account Manager, Digital Human and Speech NIMs — Gen AI
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation fueled by great technology and amazing people! Driving continued aggressive growth in our Enterprise business remains a high priority for 2024 and beyond. In the rapidly growing world of Gen AI, multi-modality is increasingly a requirement: in addition to the Text modality, use cases demand Speech and Digital Human (or Digital Avatar) capabilities. We need a senior segment sales leader to aggressively drive these technologies in the Enterprise and win NVIDIA platform (Software + Hardware) adoption. You will drive GTM (go-to-market) aspects of the collaboration, including but not limited to defining our GTM strategy (industry, partners), creating sales enablement materials, training partners and the field, working with sales account teams to create and close pipeline opportunities, and providing feedback to the product team based on customer requirements.
Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world!
What You'll Be Doing
• Drive the Enterprise business growth of NVIDIA’s Digital Human and Speech AI NIMs/Enterprise solutions. As a senior sales leader, you will have demonstrated experience in creating and driving business growth, working with lighthouse customers for new solutions, and achieving significant volume and scale. This person will be an overlay sales lead and work in virtual teams — including account managers, solution architects, sales operations, industry business development, product management, product marketing and marketing campaign teams.
• Provide cross-functional Go-To-Market (GTM) leadership to grow the business. Define, document, and align the GTM team on sales strategies and motion.
• Build and execute on a robust opportunity pipeline and software revenue for NVIDIA’s Digital Human and Speech AI Enterprise solutions.
• Engage with customers regularly (daily/weekly) and develop relationships with decision makers through regular in-person interactions (EBCs, customer visits). 10%-25% travel, domestic or international, as required.
• Define, document, and drive a scale-via-partners strategy: GSIs (global system integrators), SDPs (solution/service delivery partners), SPs (solution providers), and OEMs. Win customers via GTM-focused partner discussions.
• Partner with geography sales teams and Industry business development leaders to drive demand, create and grow opportunities and accounts.
• Codify the customer value proposition and define the core sales tools and training to enable the field and channel.
• Provide feedback to product teams based on customer input by regions and enterprise verticals. Influence product roadmap.
• Establish and develop NVIDIA’s relationships with key CXOs and decision makers across the Enterprise.
What We Need To See
• Bachelor’s degree in engineering from a leading university (or equivalent experience). Master’s degree and/or MBA or equivalent experience is desirable.
• 15+ years of sales experience in technology systems and/or software products, with a strong background in Conversational/Enterprise AI and in shaping value propositions at the solution level.
• Demonstrated understanding of AI frameworks and customer adoption journey with AI.
• Possess a wide client contact base, deep domain expertise, and knowledge of sales trends in Enterprise software.
• Ability to lead and empower a virtual team of sales managers (and help recruit the best talent).
Ways To Stand Out from the Crowd
• Track record of successfully growing revenue for new innovative technology-based solutions.
• Deep experience in Conversational AI.
• Proven ability to work effectively in a highly matrixed organization.
• Knowledge of pricing strategies and channel economics.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us.
The cash compensation range is 220,000 USD – 391,000 USD, with 85% paid through base salary and 15% variable compensation. Your cash compensation will be determined based on your location, experience and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · In-person · Full Time
PhD Research Intern, Computer Architecture and Systems
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: By submitting your résumé, you’re expressing interest in one of our 2025 Computer Architecture or Systems focused Research Internships. We’ll review résumés on an ongoing basis, and a recruiter may reach out if your experience fits one of our many internship opportunities.
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, we are increasingly known as “the AI computing company.” We're looking to grow our company, and build our teams with the smartest people in the world. Would you like to join us at the forefront of technological advancement?
What You'll Be Doing
• Develop algorithms and design hardware and software extending the state of the art in computing, graphics, media processing, and other technology areas surrounding NVIDIA's business.
• Invent new techniques, technologies, methodologies, processes, and devices to enable new products or types of products. Deliverables include prototypes and patents, leading to products and publications.
• Produce technology vision and the basis for products 5-10 years out. Focus should not be on products currently shipping or in development, except as to how they can be extended and improved.
What We Need To See
• Currently pursuing a PhD degree in relevant discipline(s) (CS, CE, EE, Physics, Math).
• Research experience in at least one of the following areas:
Computer Architecture
Operating Systems
Compilers
HPC
Systems
Parallel Computing
Distributed Computing
Networks — Large Scale System Design
Topologies
Routing protocols
Network Management
Resilience
Power Management
Resource Disaggregation
ASIC and VLSI
ASIC and VLSI Design Techniques
Machine Learning Accelerator Approaches
• Excellent programming skills in one or more of the following: C, C++, Perl, Python, and Rust.
• Familiarity with CUDA.
• A strong publication, patent, presentation, and research collaboration history is a plus.
• Excellent communication skills.
NVIDIA is widely considered one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world. Are you a creative and collaborative researcher with a real passion for computer graphics? If so, we want to hear from you!
The hourly rate for our interns is 30 USD – 90 USD. Our internship hourly rates are standard rates determined by the position and your location, year in school, degree, and experience.
You will also be eligible for Intern benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · In-person / Remote · Full Time
PhD Research Intern, Robotics and/or Autonomous Vehicles
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: By submitting your résumé, you’re expressing interest in one of our 2025 Robotics & AV Research focused Internships. We’ll review résumés on an ongoing basis, and a recruiter may reach out if your experience fits one of our many internship opportunities.
Intelligent machines powered by Artificial Intelligence that can learn, reason, and interact with people are no longer science fiction. An AI-powered robot can build 3D maps of its environment, detect and track real-life objects, learn from simulated environments, and understand language commands. This is truly an extraordinary time — the era of AI has begun. NVIDIA Research has several teams looking for world-class Research Interns including, but not limited to, the Robotics Research Lab in Seattle, Washington, and Santa Clara, California, the Generalist Embodied Agent Research (GEAR) group, and the Autonomous Vehicles Research Group.
The Robotics Research Lab is passionate about enabling robots to reach human-level dexterity, perception, and adaptability. We are a diverse and interdisciplinary team working on core robotics topics ranging from control and perception to task planning, as well as critical areas related to Sim2Real and large vision-language-action models. Our interns have the opportunity to publish original research.
The GEAR group is a collaborative research team that consistently produces influential work on multimodal foundation models, large-scale robot learning, game AI, and physical simulation. Our past projects include Eureka, VIMA, Voyager, MineDojo, MimicPlay, Prismer, and more. One of our team’s most recent milestones is Project GR00T, a foundation model for humanoid robots. Your contributions will have a significant impact on our moonshot research projects and product roadmaps.
The Autonomous Vehicle Research Group brings together a diverse and interdisciplinary set of researchers to address core topics in vehicle autonomy ranging from perception, prediction, planning and control, to long-tail generalization and robustness, as well as advance the state of the art in a number of critical related fields such as foundation models, self-supervised learning, scenario generation and simulation, decision making under uncertainty, and the verification and validation of safety-critical AI systems.
What You Will Be Doing
• Work with experts in robotics and learning to define your research project.
• Design and implement advanced techniques for robot perception, planning, and control, as well as new AI models for humanoid robots, autonomous vehicles, and embodied agents.
• Collaborate with other research team members and a diverse set of internal product teams.
• Have a broader impact through transfer and/or open-source of the technology you've developed to relevant product groups.
What We Need To See
• Currently pursuing a PhD degree in Computer Science, Electrical or Mechanical Engineering, or a related field
• Experience in one or more of the following areas:
Robot Manipulation
Robot Control
3D Computer Vision
Physics-Based Simulation
Human-Robot Interaction
Humanoid Robots
Embodied Agents
Large Vision and Language Models
Reinforcement Learning
Deep Understanding of Robot Kinematics, Dynamics, and Sensors
Autonomous Vehicles
Foundation Models
Self-Supervised Learning
Scenario Generation and Simulation
Verification and Validation of Safety-Critical AI Systems
• Programming experience and proficiency in Python (main), C, or C++
• Experience with CUDA and deep learning frameworks (e.g., TensorFlow or PyTorch)
• Strong background in research, with publications at top robotics and AI conferences (e.g., RSS, CoRL, ICRA, CVPR, NeurIPS)
• Excellent communication skills
NVIDIA is widely considered one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world. Are you a creative and collaborative researcher with a real passion for computer graphics? If so, we want to hear from you!
The hourly rate for our interns is 30 USD – 90 USD. Our internship hourly rates are standard rates determined by the position and your location, year in school, degree, and experience.
You will also be eligible for Intern benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · In-person / Remote · Full Time
PhD Research Intern, Computer Vision and Deep Learning
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: By submitting your résumé, you’re expressing interest in one of our 2025 Computer Vision and Deep Learning Research focused Internships. We’ll review résumés on an ongoing basis, and a recruiter may reach out if your experience fits one of our many internship opportunities.
NVIDIA pioneered accelerated computing to tackle some of the world's hardest problems and discover never-before-seen ways to improve the quality of life for people everywhere. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society — from gaming to robotics, self-driving cars to life-saving healthcare, climate change to virtual worlds where we can all connect and create. You will be part of an amazing collaborative research team that consistently publishes at the top venues.
Our internships offer an excellent opportunity to expand your career and get hands-on with one of our industry-leading Computer Vision and Deep Learning Research teams. We’re seeking strategic, ambitious, hard-working, and creative individuals who are passionate about helping us tackle challenges no one else can solve.
What You'll Be Doing
• Design and implement novel computer vision and machine learning methods.
• Collaborate with other team members, teams, and/or external researchers.
• Transfer your research to product groups to enable new products or types of products.
• Deliverable results include prototypes, patents, and products.
What We Need To See
• Currently pursuing a PhD degree in relevant discipline(s) (CS, CE, EE, Physics, Math)
• Research experience in at least one of the following areas:
3D Human Reconstruction
Generative Modeling of Humans
Human Avatar Creation, Gaussian Avatars, Neural Human Avatars, Text-to-Avatar Generation
Digital Twins
Diffusion Models
Generative World Models
Physics-Based Clothing/Hair/Body Simulation
Neural Radiance Field
Novel View Synthesis
Efficient Deep Learning
Reinforcement Learning
Autonomous Vehicles
Foundation Models
Self-Supervised Learning
Scenario Generation and Simulation
Verification and Validation of Safety-Critical AI Systems
• Excellent programming skills in a rapid prototyping environment such as Python; C++ and parallel programming (e.g., CUDA) are a plus
• Knowledge of common machine learning frameworks, such as PyTorch
• Strong research track record and publication record at top-tier conferences
• Excellent communication skills
NVIDIA is widely considered one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world. Are you a creative and collaborative researcher with a real passion for computer graphics? If so, we want to hear from you!
The hourly rate for our interns is 30 USD – 90 USD. Our internship hourly rates are standard rates determined by the position and your location, year in school, degree, and experience.
You will also be eligible for Intern benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA · NVIDIA · In-person · Remote · Full Time
NVIDIA 2025 Internships: Artificial Intelligence and Deep Learning
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: By submitting your résumé, you’re expressing interest in one of our 2025 Artificial Intelligence or Deep Learning Internships. We’ll review résumés on an ongoing basis, and a recruiter may reach out if your experience fits one of our many internship opportunities.
NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society — from gaming to robotics, self-driving cars to life-saving healthcare, climate change to virtual worlds where we can all connect and create.
Our internships offer an excellent opportunity to expand your career and get hands-on with one of our industry-leading Artificial Intelligence and Deep Learning teams. We’re seeking strategic, ambitious, hard-working, and creative individuals who are passionate about helping us tackle challenges no one else can solve.
Throughout the minimum 12-week internship, students will work on projects that have a measurable impact on our business. We’re looking for students pursuing bachelor's, master's, or PhD degrees within a relevant or related field.
Potential Internships in this field include:
• Autonomous Vehicles
– Developing and training state-of-the-art Deep Neural Networks for path generation
– Collecting training datasets and measuring real-time inference run-times using simulators/gyms, as well as performing in-vehicle tests
Course or internship experience related to the following areas could be required: Computer Vision, Mapping, Localization, SLAM, Image Processing, Segmentation.
• Deep Learning Applications & Algorithms
– Developing algorithms for deep learning, data analytics, or scientific computing to improve performance of GPU implementations
Course or internship experience related to the following areas could be required: Deep Neural Networks, Linear Algebra, Numerical Methods and/or Computer Vision, Software Design, Computer Memory (Disk, Memory, Caches), CPU and GPU Architectures, Networking, Numeric Libraries, Embedded System Design and Development, Drivers, Real-Time Software.
• Deep Learning Frameworks & Libraries
– Building underlying frameworks and libraries to accelerate Deep Learning on GPUs
– Contributing directly to software packages such as JAX, PyTorch, and TensorFlow, integrating the latest library (e.g., cuDNN) or CUDA features, performance tuning, and analysis
– Optimizing core deep learning algorithms and libraries (e.g., cuDNN, cuBLAS), maintaining build, test, and distribution infrastructure for these libraries and deep learning frameworks on NVIDIA-supported platforms
Course or internship experience related to the following areas could be required: Computer Architecture (CPUs, GPUs, FPGAs or other accelerators), GPU Programming Models, Performance-Oriented Parallel Programming, Optimizing for High-Performance Computing (HPC), Algorithms, Numerical Methods.
• Robotics
– Building the fundamental infrastructure and software platforms of our system, working at the very heart of the software system, which will power every robot and application built with Isaac
Course or internship experience related to the following areas could be required: Robotics, Autonomous Vehicles, Validation Frameworks for Machine Learning/Deep Learning, Operating Systems and Data Structures (threads, processes, memory, synchronization), Physics Simulation, Simulators, Computer Graphics, Version Control, Computer Vision, Cloud Technologies.
• Machine Learning
– Developing and maintaining the first-generation MLaaS (Machine Learning as a Service) Platform including data ingestion, data indexing, data labeling, visualization, dashboards, and data viewers
Course or internship experience related to the following areas could be required: Machine Learning, Deep Learning, Accelerated Computing, GPU Computing, Deep Learning Frameworks, NVIDIA RAPIDS.
What We Need To See
• Currently pursuing a bachelor's, master's, or PhD degree within Electrical Engineering, Computer Engineering, Computer Science, Artificial Intelligence or a related field.
• Depending on the internship role, prior experience or knowledge requirements could include the following programming skills and technologies: C, C++, CUDA, Python, x86, ARM CPU, GPU, Linux, Direct3D, Vulkan, OpenGL, OpenCL, Spark, Perl, Bash/Shell Scripting, Container Tools (Docker/Containers, Kubernetes), Infrastructure Platforms (AWS, Azure, GCP), Data Technologies (Kafka, ELK, Cassandra, Apache Spark), React, Go.
Click here to learn more about NVIDIA, our early talent programs, benefits offered to students and other helpful student resources related to our latest technologies and endeavors.
The hourly rate for our interns is 18 USD – 71 USD. Our internship hourly rates are standard rates determined by the position and your location, year in school, degree, and experience.
You will also be eligible for Intern benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
NVIDIA 2025 Internships: Deep Learning Computer Architecture
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: By submitting your résumé, you’re expressing interest in one of our 2025 Deep Learning Computer Architecture Internships. We’ll review résumés on an ongoing basis, and a recruiter may reach out if your experience fits one of our many internship opportunities.
NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society — from gaming to robotics, self-driving cars to life-saving healthcare, climate change to virtual worlds where we can all connect and create.
Our internships offer an excellent opportunity to expand your career and get hands-on with one of our industry-leading Deep Learning Computer Architecture teams. We’re seeking strategic, ambitious, hard-working, and creative individuals who are passionate about helping us tackle challenges no one else can solve.
Throughout the minimum 12-week internship, students will work on projects that have a measurable impact on our business. We’re looking for students pursuing bachelor's, master's, or PhD degrees within a relevant or related field.
What We Need To See
• Course or internship experience related to the following areas could be required:
Computer Architecture
Deep Learning or Machine Learning
GPU computing and parallel programming
Performance modeling, profiling, optimizing, and/or analysis
• Depending on the internship role, prior experience or knowledge requirements could include the following programming skills and technologies: C, C++, Python, Perl, GPU Computing (CUDA, OpenCL, OpenACC), Deep Learning Frameworks (PyTorch, TensorFlow, Caffe), HPC (MPI, OpenMP).
Click here to learn more about NVIDIA, our early talent programs, benefits offered to students and other helpful student resources related to our latest technologies and endeavors.
The hourly rate for our interns is 18 USD – 71 USD. Our internship hourly rates are standard rates determined by the position and your location, year in school, degree, and experience.
You will also be eligible for Intern benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
NVIDIA 2025 Internships: Systems Software Engineering
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: By submitting your résumé, you’re expressing interest in one of our 2025 Systems Software Engineering Internships. We’ll review résumés on an ongoing basis, and a recruiter may reach out if your experience fits one of our many internship opportunities.
NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society — from gaming to robotics, self-driving cars to life-saving healthcare, climate change to virtual worlds where we can all connect and create.
Our internships offer an excellent opportunity to expand your career and get hands-on with one of our industry-leading Systems Software teams. We’re seeking strategic, ambitious, hard-working, and creative individuals who are passionate about helping us tackle challenges no one else can solve.
Throughout the minimum 12-week internship, students will work on projects that have a measurable impact on our business. We’re looking for students pursuing bachelor's, master's, or PhD degrees within a relevant or related field.
Potential Internships in this field include:
Systems Software
• Defining, designing, and developing integrated (e.g., Jetson Orin) and discrete (e.g., Hopper H100) GPU system software components (e.g., runtime, math libraries) with focus on power and performance, as well as creating architecture and design specifications
Course or internship experience related to the following areas could be required: Operating Systems (Threads, Process Control, Memory/Resource Management, Virtual Memory), Multithreaded Debugging, Linux Kernel Development, RTOS Development on Embedded Platforms, Data Structures & Algorithm time/space complexity.
Graphics Systems Software
• Designing and implementing OpenGL, OpenGL ES, and Vulkan graphics drivers, platform support, and conformance tests to support new hardware features in collaboration with other software, hardware, architecture, and support teams
• Triaging and debugging various issues within the Tegra graphics software stack
Course or internship experience related to the following areas could be required: Computer Architecture, Operating Systems, Real-Time Systems Development, Device Driver Programming, Game Console Middleware, or other Low-Level Library Development, 3D/2D Graphics Theory, Implementation & Optimizations, Simulation or Emulation (writing & debugging tests).
Compiler
• Working at the center of deep-learning compiler technology, spanning architecture design and support through functional languages
• Investigating problems or optimization opportunities within the Compiler backend by working with global compiler, hardware, and application teams to oversee improvements and problem resolutions
Course or internship experience related to the following areas could be required: Compiler Development, Open Source Programming, High-Performance Computing (HPC).
Firmware & Embedded Software
• Supporting development of firmware that runs on embedded microcontrollers within GPUs, while optimizing software to improve system robustness, performance, and security
• Participating in testing new and existing firmware, and developing tools and infrastructure to improve our front-end design and verification process
Course or internship experience related to the following areas could be required: Operating Systems (Threads, Process Control, Memory/Resource Management, Virtual Memory), Embedded Systems Software Development, Data Structures & Algorithms, Computer Architecture, Computer Systems Software, Linux Kernel Development, Multi-Threaded or Multi-Process Programming, RTOS Development on Embedded Platforms.
Software Security
• Hardening and developing secure solutions across the software stack, spanning multi-node supercomputers down to microcontrollers and security co-processors
• Building tools and infrastructure to scale security efforts across large organizations and codebases with millions of lines of code
Course or internship experience related to the following areas could be required: Operating Systems, Data Structures & Algorithms, Cybersecurity, Cryptography, Computer Systems Architecture, Microcontroller and Microprocessor fundamentals (Caches, Buses, Memory Controllers, DMA, etc.).
What We Need To See
• Currently pursuing a bachelor's, master's, or PhD degree within Computer Engineering, Electrical Engineering, Computer Science, or a related field
• Depending on the internship role, prior experience or knowledge requirements could include the following programming skills and technologies:
– C, C++, CUDA, x86, ARM CPU, GPU, Linux, Perl, Bash/Shell Scripting
– Operating Systems (Threads, Process Control, Memory/Resource Management, Virtual Memory), Formal Verification Tools (SPARK, Frama-C), Linux Kernel Development, Multi-Threaded or Multi-Process Programming, Open Source Tools (Clang, LLVM, GCC), Testing Production/Automation Tools (XLA, TVM, Halide), Microprocessor Fundamentals (Caches, Buses, Memory Controllers, DMA, etc.)
Click here to learn more about NVIDIA, our early talent programs, benefits offered to students and other helpful student resources related to our latest technologies and endeavors.
The hourly rate for our interns is 18 USD – 71 USD. Our internship hourly rates are standard rates determined by the position and your location, year in school, degree, and experience.
You will also be eligible for Intern benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
NVIDIA 2025 Internships: Computer Architecture
NVIDIA · Santa Clara, CA
Session: Job Postings
Description: By submitting your résumé, you’re expressing interest in one of our 2025 Computer Architecture Internships. We’ll review résumés on an ongoing basis, and a recruiter may reach out if your experience fits one of our many internship opportunities.
NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society — from gaming to robotics, self-driving cars to life-saving healthcare, climate change to virtual worlds where we can all connect and create.
Our internships offer an excellent opportunity to expand your career and get hands-on with one of our industry-leading Computer Architecture teams. We’re seeking strategic, ambitious, hard-working, and creative individuals who are passionate about helping us tackle challenges no one else can solve.
Throughout the 12-week internship, students will work on projects that have a measurable impact on our business. We’re looking for students pursuing bachelor's, master's, or PhD degrees within a relevant or related field.
What We Need To See
Course or internship experience related to the following areas could be required:
• Computer Architecture experience in one or more of these focus areas: Computer Graphics, Deep Learning, Ray Tracing, Parallel Programming, Memory Architecture, or High-Performance Computing Systems
• Digital Systems, VLSI Design, GPU or CPU Architecture, Computer Arithmetic, CMOS Transistors and Circuits
• Deep Learning, Modelling/Performance Analysis, Parallel Programming
Depending on the internship role, prior experience or knowledge requirements could include the following programming skills and technologies: Verilog, SystemVerilog, VHDL, Linux, C, C++, Perl, Modern Graphics APIs (DirectX, OpenGL, Vulkan), GPU Computing (CUDA, OpenCL), Revision Control (Perforce, Git), HPC (MPI, OpenMP).
Click here to learn more about NVIDIA, our early talent programs, benefits offered to students and other helpful student resources related to our latest technologies and endeavors.
The hourly rate for our interns is 18 USD – 71 USD. Our internship hourly rates are standard rates determined by the position and your location, year in school, degree, and experience.
You will also be eligible for Intern benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society — from gaming to robotics, self-driving cars to life-saving healthcare, climate change to virtual worlds where we can all connect and create.
Our internships offer an excellent opportunity to expand your career and get hands-on experience with one of our industry-leading Computer Architecture teams. We’re seeking strategic, ambitious, hard-working, and creative individuals who are passionate about helping us tackle challenges no one else can solve.
Throughout the 12-week internship, students will work on projects that have a measurable impact on our business. We’re looking for students pursuing bachelor's, master's, or PhD degrees within a relevant or related field.
What We Need To See
Course or internship experience related to the following areas could be required:
• Computer Architecture experience in one or more of these focus areas: Computer Graphics, Deep Learning, Ray Tracing, Parallel Programming, Memory Architecture, or High-Performance Computing Systems
• Digital Systems, VLSI Design, GPU or CPU Architecture, Computer Arithmetic, CMOS Transistors and Circuits
• Deep Learning, Modelling/Performance Analysis, Parallel Programming
Depending on the internship role, prior experience or knowledge requirements could include the following programming skills and technologies: Verilog, SystemVerilog, VHDL, Linux, C, C++, Perl, Modern Graphics APIs (DirectX, OpenGL, Vulkan), GPU Computing (CUDA, OpenCL), Revision Control (Perforce, Git), HPC (MPI, OpenMP).
Click here to learn more about NVIDIA, our early talent programs, benefits offered to students and other helpful student resources related to our latest technologies and endeavors.
The hourly rate for our interns is 18 USD – 71 USD. Internship hourly rates are standard pay determined by the position and your location, year in school, degree, and experience.
You will also be eligible for Intern benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
2024-10-27
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
USA
NVIDIA
In-person
Remote
Full Time
Software Developer 5 - Strategic Customers
·
Oracle
·
Remote - USA
Session: Job Postings
Description: As an AI/ML Infrastructure Engineer, you will play a critical role in designing, implementing, and maintaining the infrastructure that supports our AI and machine learning initiatives. You will work closely with data scientists, software engineers, and IT professionals to ensure that our AI/ML models are deployed efficiently, securely, and at scale. Your expertise will be crucial in optimizing our infrastructure for performance, reliability, and cost-effectiveness.
Career Level - IC5
Requirements
Responsibilities:
Take ownership of problems and work to identify solutions.
Ability to think through the solution and identify/document potential issues impacting your customers.
Design, deploy, and manage infrastructure components such as cloud resources, distributed computing systems, and data storage solutions to support AI/ML workflows.
Collaborate with scientists and software/infrastructure engineers to understand infrastructure requirements for training, testing, and deploying machine learning models.
Implement automation solutions for provisioning, configuring, and monitoring AI/ML infrastructure to streamline operations and enhance productivity.
Optimize infrastructure performance by tuning parameters, optimizing resource utilization, and implementing caching and data pre-processing techniques.
Ensure security and compliance standards are met throughout the AI/ML infrastructure stack, including data encryption, access control, and vulnerability management.
Troubleshoot infrastructure performance, scalability, and reliability issues and implement solutions to mitigate risks and minimize downtime.
Stay updated on emerging technologies and best practices in AI/ML infrastructure and evaluate their potential impact on our systems and workflows.
Document infrastructure designs, configurations, and procedures to facilitate knowledge sharing and ensure maintainability.
Qualifications:
Experience in scripting and automation using tools like Ansible, Terraform, and/or Kubernetes.
Experience with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools for managing distributed systems.
Solid understanding of networking concepts, security principles, and best practices.
Excellent problem-solving skills, with the ability to troubleshoot complex issues and drive resolution in a fast-paced environment.
Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams and convey technical concepts to non-technical stakeholders.
Strong documentation skills with experience documenting infrastructure designs, configurations, procedures, and troubleshooting steps to facilitate knowledge sharing, ensure maintainability, and enhance team collaboration.
Strong Linux skills with hands-on experience in Oracle Linux/RHEL/CentOS, Ubuntu, and Debian distributions, including system administration, package management, shell scripting, and performance optimization.
Preferred Qualifications
Strong proficiency in at least one programming language such as Python, Rust, Go, Java, or Scala
Proven experience designing, implementing, and managing infrastructure for AI/ML or HPC workloads.
An understanding of machine learning frameworks and libraries such as TensorFlow, PyTorch, or scikit-learn, and their deployment in production environments, is a plus.
Familiarity with DevOps practices and tools for continuous integration, deployment, and monitoring (e.g., Jenkins, GitLab CI/CD, Prometheus).
Strong experience with High-Performance Computing systems
Disclaimer:
Certain US customer or client-facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates.
Range and benefit information provided in this posting is specific to the stated locations only.
US: Hiring Range: from $96,800 to $251,600 per annum. May be eligible for bonus, equity, and compensation deferral.
Oracle maintains broad salary ranges for its roles in order to account for variations in knowledge, skills, experience, market conditions and locations, as well as reflect Oracle’s differing products, industries and lines of business.
Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.
Oracle US offers a comprehensive benefits package which includes the following:
1. Medical, dental, and vision insurance, including expert medical opinion
2. Short-term and long-term disability
3. Life insurance and AD&D
4. Supplemental life insurance (Employee/Spouse/Child)
5. Health care and dependent care Flexible Spending Accounts
6. Pre-tax commuter and parking benefits
7. 401(k) Savings and Investment Plan with company match
8. Paid time off: Flexible Vacation is provided to all eligible employees assigned to a salaried (non-overtime eligible) position. Accrued Vacation is provided to all other employees eligible for vacation benefits. For employees working at least 35 hours per week, the vacation accrual rate is 13 days annually for the first three years of employment and 18 days annually for subsequent years of employment. Vacation accrual is prorated for employees working between 20 and 34 hours per week. Employees working fewer than 20 hours per week are not eligible for vacation.
9. 11 paid holidays
10. Paid sick leave: 72 hours of paid sick leave upon date of hire. Refreshes each calendar year. Unused balance will carry over each year up to a maximum cap of 112 hours.
11. Paid parental leave
12. Adoption assistance
13. Employee Stock Purchase Plan
14. Financial planning and group legal
15. Voluntary benefits including auto, homeowner and pet insurance
The role will generally accept applications for at least three calendar days from the posting date or as long as the job remains posted.
Company Description: As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s problems. True innovation starts with diverse perspectives and various abilities and backgrounds.
When everyone’s voice is heard, we’re inspired to go beyond what’s been done before. It’s why we’re committed to expanding our inclusive workforce that promotes diverse insights and perspectives.
We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer a highly competitive suite of employee benefits designed on the principles of parity and consistency. We put our people first with flexible medical, life insurance and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by calling +1 888 404 2494, option one.
Disclaimer:
Oracle is an Equal Employment Opportunity Employer*. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
* Which includes being a United States Affirmative Action Employer
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Principal Member of Technical Staff - Strategic Customers
·
Oracle
·
USA
Session: Job Postings
Description: As an AI/ML Infrastructure Engineer, you will play a critical role in designing, implementing, and maintaining the infrastructure that supports our AI and machine learning initiatives. You will work closely with data scientists, software engineers, and IT professionals to ensure that our AI/ML models are deployed efficiently, securely, and at scale. Your expertise will be crucial in optimizing our infrastructure for performance, reliability, and cost-effectiveness.
Career Level - IC4
Requirements
Responsibilities:
Take ownership of problems and work to identify solutions.
Ability to think through the solution and identify/document potential issues impacting your customers.
Design, deploy, and manage infrastructure components such as cloud resources, distributed computing systems, and data storage solutions to support AI/ML workflows.
Collaborate with scientists and software/infrastructure engineers to understand infrastructure requirements for training, testing, and deploying machine learning models.
Implement automation solutions for provisioning, configuring, and monitoring AI/ML infrastructure to streamline operations and enhance productivity.
Optimize infrastructure performance by tuning parameters, optimizing resource utilization, and implementing caching and data pre-processing techniques.
Ensure security and compliance standards are met throughout the AI/ML infrastructure stack, including data encryption, access control, and vulnerability management.
Troubleshoot infrastructure performance, scalability, and reliability issues and implement solutions to mitigate risks and minimize downtime.
Stay updated on emerging technologies and best practices in AI/ML infrastructure and evaluate their potential impact on our systems and workflows.
Document infrastructure designs, configurations, and procedures to facilitate knowledge sharing and ensure maintainability.
Qualifications:
Experience in scripting and automation using tools like Ansible, Terraform, and/or Kubernetes.
Experience with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools for managing distributed systems.
Solid understanding of networking concepts, security principles, and best practices.
Excellent problem-solving skills, with the ability to troubleshoot complex issues and drive resolution in a fast-paced environment.
Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams and convey technical concepts to non-technical stakeholders.
Strong documentation skills with experience documenting infrastructure designs, configurations, procedures, and troubleshooting steps to facilitate knowledge sharing, ensure maintainability, and enhance team collaboration.
Strong Linux skills with hands-on experience in Oracle Linux/RHEL/CentOS, Ubuntu, and Debian distributions, including system administration, package management, shell scripting, and performance optimization.
Preferred Qualifications
Strong proficiency in at least one programming language such as Python, Rust, Go, Java, or Scala
Proven experience designing, implementing, and managing infrastructure for AI/ML or HPC workloads.
An understanding of machine learning frameworks and libraries such as TensorFlow, PyTorch, or scikit-learn, and their deployment in production environments, is a plus.
Familiarity with DevOps practices and tools for continuous integration, deployment, and monitoring (e.g., Jenkins, GitLab CI/CD, Prometheus).
Strong experience with High-Performance Computing systems
Disclaimer:
Certain US customer or client-facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates.
Range and benefit information provided in this posting is specific to the stated locations only.
US: Hiring Range in USD: from $109,200 to $178,700 per annum. May be eligible for bonus and equity.
Oracle maintains broad salary ranges for its roles in order to account for variations in knowledge, skills, experience, market conditions and locations, as well as reflect Oracle’s differing products, industries and lines of business.
Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.
Oracle US offers a comprehensive benefits package which includes the following:
1. Medical, dental, and vision insurance, including expert medical opinion
2. Short-term and long-term disability
3. Life insurance and AD&D
4. Supplemental life insurance (Employee/Spouse/Child)
5. Health care and dependent care Flexible Spending Accounts
6. Pre-tax commuter and parking benefits
7. 401(k) Savings and Investment Plan with company match
8. Paid time off: Flexible Vacation is provided to all eligible employees assigned to a salaried (non-overtime eligible) position. Accrued Vacation is provided to all other employees eligible for vacation benefits. For employees working at least 35 hours per week, the vacation accrual rate is 13 days annually for the first three years of employment and 18 days annually for subsequent years of employment. Vacation accrual is prorated for employees working between 20 and 34 hours per week. Employees working fewer than 20 hours per week are not eligible for vacation.
9. 11 paid holidays
10. Paid sick leave: 72 hours of paid sick leave upon date of hire. Refreshes each calendar year. Unused balance will carry over each year up to a maximum cap of 112 hours.
11. Paid parental leave
12. Adoption assistance
13. Employee Stock Purchase Plan
14. Financial planning and group legal
15. Voluntary benefits including auto, homeowner and pet insurance
The role will generally accept applications for at least three calendar days from the posting date or as long as the job remains posted.
Company Description: As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s problems. True innovation starts with diverse perspectives and various abilities and backgrounds.
When everyone’s voice is heard, we’re inspired to go beyond what’s been done before. It’s why we’re committed to expanding our inclusive workforce that promotes diverse insights and perspectives.
We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer a highly competitive suite of employee benefits designed on the principles of parity and consistency. We put our people first with flexible medical, life insurance and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by calling +1 888 404 2494, option one.
Disclaimer:
Oracle is an Equal Employment Opportunity Employer*. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
* Which includes being a United States Affirmative Action Employer
Event Type: Job Posting
Time: Tuesday, 19 November 2024, 10:30am - 3pm EST
Location: Exhibit Hall A3 - Job Fair Inside
Principal Member of Technical Staff - Strategic Customers 2
·
Oracle
·
USA
Session: Job Postings
Description: As an AI/ML Infrastructure Engineer, you will play a critical role in designing, implementing, and maintaining the infrastructure that supports our AI and machine learning initiatives. You will work closely with data scientists, software engineers, and IT professionals to ensure that our AI/ML models are deployed efficiently, securely, and at scale. Your expertise will be crucial in optimizing our infrastructure for performance, reliability, and cost-effectiveness.
Career Level - IC4
Requirements
Responsibilities:
Take ownership of problems and work to identify solutions.
Ability to think through the solution and identify/document potential issues impacting your customers.
Design, deploy, and manage infrastructure components such as cloud resources, distributed computing systems, and data storage solutions to support AI/ML workflows.
Collaborate with scientists and software/infrastructure engineers to understand infrastructure requirements for training, testing, and deploying machine learning models.
Implement automation solutions for provisioning, configuring, and monitoring AI/ML infrastructure to streamline operations and enhance productivity.
Optimize infrastructure performance by tuning parameters, optimizing resource utilization, and implementing caching and data pre-processing techniques.
Ensure security and compliance standards are met throughout the AI/ML infrastructure stack, including data encryption, access control, and vulnerability management.
Troubleshoot infrastructure performance, scalability, and reliability issues and implement solutions to mitigate risks and minimize downtime.
Stay updated on emerging technologies and best practices in AI/ML infrastructure and evaluate their potential impact on our systems and workflows.
Document infrastructure designs, configurations, and procedures to facilitate knowledge sharing and ensure maintainability.
Qualifications:
Experience in scripting and automation using tools like Ansible, Terraform, and/or Kubernetes.
Experience with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools for managing distributed systems.
Solid understanding of networking concepts, security principles, and best practices.
Excellent problem-solving skills, with the ability to troubleshoot complex issues and drive resolution in a fast-paced environment.
Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams and convey technical concepts to non-technical stakeholders.
Strong documentation skills with experience documenting infrastructure designs, configurations, procedures, and troubleshooting steps to facilitate knowledge sharing, ensure maintainability, and enhance team collaboration.
Strong Linux skills with hands-on experience in Oracle Linux/RHEL/CentOS, Ubuntu, and Debian distributions, including system administration, package management, shell scripting, and performance optimization.
Preferred Qualifications
Strong proficiency in at least one programming language such as Python, Rust, Go, Java, or Scala
Proven experience designing, implementing, and managing infrastructure for AI/ML or HPC workloads.
An understanding of machine learning frameworks and libraries such as TensorFlow, PyTorch, or scikit-learn, and their deployment in production environments, is a plus.
Familiarity with DevOps practices and tools for continuous integration, deployment, and monitoring (e.g., Jenkins, GitLab CI/CD, Prometheus).
Strong experience with High-Performance Computing systems
Disclaimer:
Certain US customer or client-facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates.
Range and benefit information provided in this posting is specific to the stated locations only.
US: Hiring Range in USD: from $109,200 to $178,700 per annum. May be eligible for bonus and equity.
Oracle maintains broad salary ranges for its roles in order to account for variations in knowledge, skills, experience, market conditions and locations, as well as reflect Oracle’s differing products, industries and lines of business.
Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.
Oracle US offers a comprehensive benefits package which includes the following:
1. Medical, dental, and vision insurance, including expert medical opinion
2. Short-term and long-term disability
3. Life insurance and AD&D
4. Supplemental life insurance (Employee/Spouse/Child)
5. Health care and dependent care Flexible Spending Accounts
6. Pre-tax commuter and parking benefits
7. 401(k) Savings and Investment Plan with company match
8. Paid time off: Flexible Vacation is provided to all eligible employees assigned to a salaried (non-overtime eligible) position. Accrued Vacation is provided to all other employees eligible for vacation benefits. For employees working at least 35 hours per week, the vacation accrual rate is 13 days annually for the first three years of employment and 18 days annually for subsequent years of employment. Vacation accrual is prorated for employees working between 20 and 34 hours per week. Employees working fewer than 20 hours per week are not eligible for vacation.
9. 11 paid holidays
10. Paid sick leave: 72 hours of paid sick leave upon date of hire. Refreshes each calendar year. Unused balance will carry over each year up to a maximum cap of 112 hours.
11. Paid parental leave
12. Adoption assistance
13. Employee Stock Purchase Plan
14. Financial planning and group legal
15. Voluntary benefits including auto, homeowner and pet insurance
The role will generally accept applications for at least three calendar days from the posting date or as long as the job remains posted.
Company Description: As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s problems. True innovation starts with diverse perspectives and various abilities and backgrounds.
When everyone’s voice is heard, we’re inspired to go beyond what’s been done before. It’s why we’re committed to expanding our inclusive workforce that promotes diverse insights and perspectives.
We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer a highly competitive suite of employee benefits designed on the principles of parity and consistency. We put our people first with flexible medical, life insurance and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by calling +1 888 404 2494, option one.
Disclaimer:
Oracle is an Equal Employment Opportunity Employer*. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
* Which includes being a United States Affirmative Action Employer