DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 8, 16, and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 8 recites:
The autonomous or semi-autonomous machine of claim 7, wherein
at least one passenger of the one or more passengers is picked up based at least on identifying the at least one passenger using a first subset of the sensor data obtained using the one or more first sensors, and the at least one passenger is monitored between pick up and drop off using a second subset of the sensor data obtained using the one or more second sensors.
The original disclosure does not teach or suggest “a first subset of the sensor data” or “a second subset of the sensor data”. Rather, paragraph 0038 of the filed specification recites “first sensors” that monitor the outside of the host vehicle and “second sensors” that monitor the inside. This monitoring is reasonably performed using the sensor data obtained using the one or more first sensors, but a first subset of this data is not taught in the original disclosure, so far as the examiner can tell. Paragraph 0039 teaches that the system can “use the first sensors to recognize a gesture the passenger makes to signal that the passenger wishes to use the vehicle.” Paragraph 00130 teaches that if a vehicle has “already picked up all passengers who are waited at the stop, the vehicle may delay moving to the next stop if an additional passenger is running toward the vehicle waving her arms to signal that the vehicle should wait because she wants to get on board.” Paragraph 00131 teaches that the “outward facing sensors,” which are analogous to the “first sensors” in the present claim, can track the person running to the door of the vehicle, and the controller can open the door for the person. Paragraph 00148 teaches recognizing a hailing motion as indicative that a person wants a ride on the shuttle. Paragraph 00221 also teaches that a shuttle “may use facial recognition technology to identify the presence and arrival at Shuttle 1 of the requesting passenger and may open the shuttle door upon identifying client 1.” Thus the “identifying the at least one passenger” using the first sensors can mean at least identifying a potential passenger’s ride-hailing motion or face. That limitation has written description support.
The present specification teaches that “one passenger is monitored between pick up and drop off” as an effect of the fact that the internal camera monitors the interior of the cabin. The disclosure does not teach or suggest that the interior camera turns off when the passenger is dropped off and there are no more passengers, or anything along those lines. Paragraph 0033 teaches that, “once on board,” the interior cameras are able to detect what display screen a passenger is looking at and generate content accordingly, such as visual reminders for a passenger to exit at their stop. Paragraph 0049 teaches that the “second sensors may be configured to simultaneously sense activities of multiple passengers within the passenger space.” Paragraph 00131 teaches a passenger might run to catch the shuttle, and then, “once on board, the interior facing sensors observe the newly board passenger”. Paragraph 00243 teaches that the interior camera can also use facial recognition to “alert particular passengers when the vehicle has reached their particular destination.” Thus, the present specification teaches that the second sensors are active all the time. The second sensors can monitor a passenger from pick up to drop off in the sense that the second sensors monitor the passenger cabin and whoever happens to be in it. When the passenger exits the passenger cabin, it is impossible for the second sensors to monitor that passenger any longer. Thus, when the claim reads “one passenger is monitored between pick up and drop off,” that can broadly and reasonably be interpreted as: one passenger is monitored within the passenger space.
The examiner is wary of specification creep due to the present application containing multiple continuations. Yet the examiner will fully consider arguments that the first and second subset idea is, in fact, in the original disclosure. For now, claim 8 will be interpreted as follows, with the examiner’s deletions in double strikethrough:
The autonomous or semi-autonomous machine of claim 7, wherein
at least one passenger of the one or more passengers is picked up based at least on identifying the at least one passenger using
Claims 16 and 20 recite similar language, are rejected for the same reasons, and will be similarly interpreted.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 17 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Arditi et al. (US2019/0197430).
Regarding claim 17, Arditi discloses:
An autonomous or semi-autonomous machine (see Fig. 1 item 140 and the first sentence of paragraph 0019) comprising:
a propulsion system (see paragraph 0025 for an “engine”.);
a passenger space (see Fig. 1 and paragraph 0044 for a “passenger compartment of the vehicle”.);
one or more first sensors having fields of view or sensory fields outside of the autonomous or semi-autonomous machine (see paragraph 0104 for exterior sensors);
one or more second sensors having fields of view or sensory fields within the passenger space (see Fig. 2 and paragraph 0028 for a device 160 that includes “sensors for monitoring the passenger compartment of the vehicle”. The sensors can be cameras. See Fig. 4 and paragraph 0035 for sensors 401-408, including cameras, that are integrated into the vehicle and “capture full sensory data of passengers sitting in the front of the vehicle and also partial sensory data of passengers sitting in the back.”);
a computing system, the computing system including one or more central processing units (CPU) and one or more graphics processing units (GPUs) capable of massively parallel processing (see paragraph 0109 for a system-on-chip (SOC). See paragraph 0105 for using CPUs and GPUs), wherein:
the computing system is to perform one or more operations associated with control of the autonomous or semi-autonomous machine based at least on processing sensor data obtained using at least one sensor of the one or more first sensors (see Fig. 10, step 1060, and paragraph 0078, where step 1060 obtains sensor data from lidar or GPS capturing an environment of the vehicle.) or the one or more second sensors (see Fig. 10, step 1060, and paragraph 0078, where step 1060 obtains sensor data of the “ride requestor….seated in the vehicle.”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 7, 9, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Arditi et al. (US2019/0197430) in view of Lie et al. (US2018/0314941).
Regarding claim 1, Arditi teaches:
An autonomous or semi-autonomous machine (see Fig. 1 item 140 and the first sentence of paragraph 0019) comprising:
a propulsion system (see paragraph 0025 for an “engine”.);
a passenger space (see Fig. 1 and paragraph 0044 for a “passenger compartment of the vehicle”.);
one or more first sensors having fields of view or sensory fields outside of the autonomous or semi-autonomous machine (see paragraph 0104 for exterior sensors);
one or more second sensors having fields of view or sensory fields within the passenger space (see Fig. 2 and paragraph 0028 for a device 160 that includes “sensors for monitoring the passenger compartment of the vehicle”. The sensors can be cameras. See Fig. 4 and paragraph 0035 for sensors 401-408, including cameras, that are integrated into the vehicle and “capture full sensory data of passengers sitting in the front of the vehicle and also partial sensory data of passengers sitting in the back.”);
a computing system capable of performing massively parallel processing, the computing system including one or more systems-on-a-chip (SoCs), the one or more SoCs including one or more central processing units (CPU), one or more graphics processing units (GPUs) (see paragraph 0109 for a system-on-chip (SOC). See paragraph 0105 for using CPUs and GPUs.), and one or more hardware accelerators (note that Arditi does not teach this last clause.),
wherein the computing system is to perform one or more operations associated with control of the autonomous or semi-autonomous machine based at least on processing sensor data obtained using at least (see paragraph 0079 for determining that things outside or inside the vehicle are “normal” or not. If they are normal, the vehicle will drive autonomously. If not, the vehicle may alert a passenger that events in the compartment are recorded and may drive to the police station, according to paragraph 0082)
one sensor of the one or more first sensors (see Fig. 10, step 1060, and paragraph 0078, where step 1060 obtains sensor data from lidar or GPS capturing an environment of the vehicle.)
or the one or more second sensors (see Fig. 10, step 1060, and paragraph 0078, where step 1060 obtains sensor data of the “ride requestor….seated in the vehicle.”).
Yet Arditi does not further teach:
a computing system capable of performing massively parallel processing, the computing system including one or more systems-on-a-chip (SoCs), the one or more SoCs including one or more central processing units (CPU), one or more graphics processing units (GPUs), and one or more hardware accelerators.
However, Lie teaches:
a computing system capable of performing massively parallel processing, the computing system including one or more systems-on-a-chip (SoCs), the one or more SoCs including one or more central processing units (CPU), one or more graphics processing units (GPUs), and one or more hardware accelerators (see paragraph 0070 for “using a deep learning accelerator”. See Fig. 1 and paragraph 0469 for using the system on an autonomous vehicle. See paragraph 0475 for the system using a “deep learning accelerator 120” and GPUs. See paragraph 0740 for using GPUs or a CPU. See paragraphs 0087-0088 for using “wafer-scale integration” of multiple elements in the system instead of “inter-chip interconnect”. See paragraph 0799 for the operations disclosed by Lie being performed by a “system-on-a-chip” (SoC).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Arditi, to add the additional features of a computing system capable of performing massively parallel processing, the computing system including one or more systems-on-a-chip (SoCs), the one or more SoCs including one or more central processing units (CPU), one or more graphics processing units (GPUs), and one or more hardware accelerators, as taught by Lie. The motivation for doing so would be to “provide improvements in one or more of accuracy, performance, and energy efficiency,” as recognized by Lie (see paragraph 0065).
This conclusion of obviousness corresponds to KSR rationale “A”: it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.
In summary, Arditi teaches every limitation of claim 1 except the one or more hardware accelerators. Lie teaches an autonomous vehicle which uses the disclosed computing system for at least sensor data processing.
See the teaching references in the “Additional Art” section of this detailed action for references published by Nvidia, the present applicant, stating that GPUs perform massively parallel processing by definition.
Regarding claim 2, Arditi and Lie teach the autonomous or semi-autonomous machine of claim 1.
Lie further teaches:
The autonomous or semi-autonomous machine of claim 1, wherein
the massively parallel processing is achievable using, at least, the one or more GPUs (see paragraph 0740 for using GPUs.).
See the teaching references in the “Additional Art” section of this detailed action for references published by Nvidia, the present applicant, stating that GPUs perform massively parallel processing by definition.
Regarding claim 3, Arditi and Lie teach the autonomous or semi-autonomous machine of claim 1.
Arditi further teaches:
the autonomous or semi-autonomous machine is capable of level 3 autonomous vehicle functionality or greater (see paragraph 0019 for a vehicle that is “human-driven or autonomous”. Since level 3 autonomy is a low level of autonomy and paragraph 0019 places “human-driven” in contrast to autonomous operation, Arditi teaches at least level 3 autonomy. See paragraph 0039 for the vehicle being an “autonomous vehicle without a human driver”.).
Regarding claim 4, Arditi and Lie teach the autonomous or semi-autonomous machine of claim 1.
Arditi further teaches:
The autonomous or semi-autonomous machine of claim 1, further comprising
one or more modems for wireless communication over one or more cellular networks, wherein data is received from one or more remote computing devices, via the one or more modems, to update one or more neural networks, one or more algorithms, or one or more maps stored on the autonomous or semi-autonomous machine (see paragraph 0102).
Regarding claim 7, Arditi and Lie teach the autonomous or semi-autonomous machine of claim 1.
Arditi further teaches:
The autonomous or semi-autonomous machine of claim 1, wherein
the one or more operations associated with control of the autonomous or semi-autonomous machine include autonomously picking up and dropping off one or more passengers (see Fig. 6 for a system that matches a ride requestor to a ride provider and drives the requestor to their destination (YES out of 670).).
Regarding claim 9, Arditi and Lie teach the autonomous or semi-autonomous machine of claim 1.
Yet Arditi does not further teach:
The autonomous or semi-autonomous machine of claim 1, wherein
the one or more hardware accelerators include at least one of a deep learning accelerator (DLA) or a vision accelerator.
However, Lie teaches:
the one or more hardware accelerators include at least one of a deep learning accelerator (DLA) or a vision accelerator (see paragraph 0070 for “using a deep learning accelerator”. See Fig. 1 and paragraph 0469 for using the system on an autonomous vehicle. See paragraph 0475 for the system using a “deep learning accelerator 120” and GPUs. See paragraph 0740 for using GPUs or a CPU. See paragraphs 0087-0088 for using “wafer-scale integration” of multiple elements in the system instead of “inter-chip interconnect”. See paragraph 0799 for the operations disclosed by Lie being performed by a “system-on-a-chip” (SoC).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Arditi and Lie, to add the additional features of: the one or more hardware accelerators include at least one of a deep learning accelerator (DLA) or a vision accelerator, as taught by Lie. The motivation for doing so would be to “provide improvements in one or more of accuracy, performance, and energy efficiency,” as recognized by Lie (see paragraph 0065).
This conclusion of obviousness corresponds to KSR rationale “A”: it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.
Regarding claim 18, Arditi teaches the autonomous or semi-autonomous machine of claim 17.
Yet Arditi does not further teach:
The autonomous or semi-autonomous machine of claim 17, further comprising
one or more hardware accelerators, the one or more hardware accelerators including at least one of a deep learning accelerator (DLA) or a vision accelerator.
However Lie teaches:
one or more hardware accelerators, the one or more hardware accelerators including at least one of a deep learning accelerator (DLA) or a vision accelerator (see paragraph 0070 for “using a deep learning accelerator”. See Fig. 1 and paragraph 0469 for using the system on an autonomous vehicle. See paragraph 0475 for the system using a “deep learning accelerator 120” and GPUs. See paragraph 0740 for using GPUs or a CPU. See paragraphs 0087-0088 for using “wafer-scale integration” of multiple elements in the system instead of “inter-chip interconnect”. See paragraph 0799 for the operations disclosed by Lie being performed by a “system-on-a-chip” (SoC).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Arditi, to add the additional features of: one or more hardware accelerators, the one or more hardware accelerators including at least one of a deep learning accelerator (DLA) or a vision accelerator, as taught by Lie. The motivation for doing so would be to “provide improvements in one or more of accuracy, performance, and energy efficiency,” as recognized by Lie (see paragraph 0065).
This conclusion of obviousness corresponds to KSR rationale “A”: it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.
Regarding claim 19, the claim is substantially similar to claim 3. Please see the rejection for that claim.
Claims 5, 6, and 10-15 are rejected under 35 U.S.C. 103 as being unpatentable over Arditi et al. (US2019/0197430) in view of Lie et al. (US2018/0314941) and further in view of Frtunikj (DE102017214531A1).
Regarding claim 5, Arditi and Lie teach the autonomous or semi-autonomous machine of claim 1.
Yet Arditi and Lie do not further teach:
The autonomous or semi-autonomous machine of claim 1, wherein
one or more operations of the autonomous or semi-autonomous machine are capable of achieving ISO 26262 automotive safety integrity level (ASIL) B or greater.
However, Frtunikj teaches:
one or more operations of the autonomous or semi-autonomous machine are capable of achieving ISO 26262 automotive safety integrity level (ASIL) B or greater (see page 3 of the attached English translation for an autonomous vehicle that can operate at “SAE stage 3-5” and does so with, “for example, the ASIL-D standard or…ASIL-C standard”.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Arditi and Lie, to add the additional features of: one or more operations of the autonomous or semi-autonomous machine are capable of achieving ISO 26262 automotive safety integrity level (ASIL) B or greater, as taught by Frtunikj. The motivation for doing so would be to have an autonomous vehicle that is “classified according to a given safety condition” and a recognized standard, as recognized by Frtunikj (see page 2).
This conclusion of obviousness corresponds to KSR rationale “A”: it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.
Regarding claim 6, Arditi and Lie teach the autonomous or semi-autonomous machine of claim 5.
Yet Arditi and Lie do not further teach:
The autonomous or semi-autonomous machine of claim 5, wherein
one or more operations of the autonomous or semi-autonomous machine are capable of achieving ISO 26262 automotive safety integrity level (ASIL) D.
However, Frtunikj teaches:
one or more operations of the autonomous or semi-autonomous machine are capable of achieving ISO 26262 automotive safety integrity level (ASIL) D (see page 3 of the attached English translation for an autonomous vehicle that can operate at “SAE stage 3-5” and does so with, “for example, the ASIL-D standard or…ASIL-C standard”.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Arditi, Lie, and Frtunikj, to add the additional features of: one or more operations of the autonomous or semi-autonomous machine are capable of achieving ISO 26262 automotive safety integrity level (ASIL) D, as taught by Frtunikj. The motivation for doing so would be to have an autonomous vehicle that is “classified according to a given safety condition” and a recognized standard, as recognized by Frtunikj (see page 2).
This conclusion of obviousness corresponds to KSR rationale “A”: it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.
Regarding claim 10, Arditi teaches:
An autonomous or semi-autonomous machine (see Fig. 1 item 140 and the first sentence of paragraph 0019) comprising:
a propulsion system (see paragraph 0025 for an “engine”.);
a passenger space (see Fig. 1 and paragraph 0044 for a “passenger compartment of the vehicle”.);
one or more first sensors having fields of view or sensory fields outside of the autonomous or semi-autonomous machine (see paragraph 0104 for exterior sensors);
one or more second sensors having fields of view or sensory fields within the passenger space (see Fig. 2 and paragraph 0028 for a device 160 that includes “sensors for monitoring the passenger compartment of the vehicle”. The sensors can be cameras. See Fig. 4 and paragraph 0035 for sensors 401-408, including cameras, that are integrated into the vehicle and “capture full sensory data of passengers sitting in the front of the vehicle and also partial sensory data of passengers sitting in the back.”);
a computing system, the computing system including one or more systems-on-a-chip (SoCs), the one or more SoCs including one or more central processing units (CPU), one or more graphics processing units (GPUs) (see paragraph 0109 for a system-on-chip (SOC). See paragraph 0105 for using CPUs and GPUs.), and one or more hardware accelerators (note that Arditi does not teach this last clause.), wherein:
the computing system is to perform one or more operations associated with control of the autonomous or semi-autonomous machine based at least on processing sensor data obtained using at least one sensor of the one or more first sensors or the one or more second sensors (see paragraph 0079 for determining that things outside or inside the vehicle are “normal” or not. If they are normal, the vehicle will drive autonomously. If not, the vehicle may alert a passenger that events in the compartment are recorded and may drive to the police station, according to paragraph 0082. See Fig. 10, step 1060, and paragraph 0078, where step 1060 obtains sensor data from lidar or GPS capturing an environment of the vehicle.);
the autonomous or semi-autonomous machine is capable of achieving level 3 autonomous vehicle functionality or greater (see paragraph 0019 for a vehicle that is “human-driven or autonomous”. Since level 3 autonomy is a low level of autonomy and paragraph 0019 places “human-driven” in contrast to autonomous operation, Arditi teaches at least level 3 autonomy. See paragraph 0039 for the vehicle being an “autonomous vehicle without a human driver”.).
Yet Arditi does not further teach:
a computing system, the computing system including one or more systems-on-a-chip (SoCs), the one or more SoCs including one or more central processing units (CPU), one or more graphics processing units (GPUs), and one or more hardware accelerators, and
at least one operation of the one or more operations is capable of satisfying ISO 26262 automotive safety integrity level (ASIL) D.
However, Lie teaches:
a computing system, the computing system including one or more systems-on-a-chip (SoCs), the one or more SoCs including one or more central processing units (CPU), one or more graphics processing units (GPUs), and one or more hardware accelerators (see paragraph 0070 for “using a deep learning accelerator”. See Fig. 1 and paragraph 0469 for using the system on an autonomous vehicle. See paragraph 0475 for the system using a “deep learning accelerator 120” and GPUs. See paragraph 0740 for using GPUs or a CPU. See paragraphs 0087-0088 for using “wafer-scale integration” of multiple elements in the system instead of “inter-chip interconnect”. See paragraph 0799 for the operations disclosed by Lie being performed by a “system-on-a-chip” (SoC).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Arditi, to add the additional features of a computing system, the computing system including one or more systems-on-a-chip (SoCs), the one or more SoCs including one or more central processing units (CPU), one or more graphics processing units (GPUs), and one or more hardware accelerators, as taught by Lie. The motivation for doing so would be to “provide improvements in one or more of accuracy, performance, and energy efficiency,” as recognized by Lie (see paragraph 0065).
This conclusion of obviousness corresponds to KSR rationale “A”: it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.
Yet Arditi and Lie do not explicitly further teach:
at least one operation of the one or more operations is capable of satisfying ISO 26262 automotive safety integrity level (ASIL) D.
However, Frtunikj teaches:
at least one operation of the one or more operations is capable of satisfying ISO 26262 automotive safety integrity level (ASIL) D (see page 3 of the attached English translation for an autonomous vehicle that can operate at “SAE stage 3-5” and does so with, “for example, the ASIL-D standard or…ASIL-C standard”.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Arditi and Lie, to add the additional features of: one or more operations of the autonomous or semi-autonomous machine are capable of achieving ISO 26262 automotive safety integrity level (ASIL) D, as taught by Frtunikj. The motivation for doing so would be to have an autonomous vehicle that is “classified according to a given safety condition” and a recognized standard, as recognized by Frtunikj (see page 2).
This conclusion of obviousness corresponds to KSR rationale “A”: it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.
Regarding claim 11, Arditi, Lie, and Frtunikj teach the autonomous or semi-autonomous machine of claim 10.
Yet Arditi does not further teach:
The autonomous or semi-autonomous machine of claim 10, wherein
the one or more hardware accelerators include at least one of a deep learning accelerator (DLA) or a vision accelerator.
However, Lie teaches:
the one or more hardware accelerators include at least one of a deep learning accelerator (DLA) or a vision accelerator (see paragraph 0070 for “using a deep learning accelerator”. See Fig. 1 and paragraph 0469 for using the system on an autonomous vehicle. See paragraph 0475 for the system using a “deep learning accelerator 120” and GPUs. See paragraph 0740 for using GPUs or a CPU. See paragraphs 0087-0088 for using “wafer-scale integration” of “multiple elements in the system” instead of “inter-chip interconnect”. See paragraph 0799 for the operations disclosed by Lie being performed by a “system-on-a-chip” (SoC).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Arditi, Lie, and Frtunikj, to add the additional features of: the one or more hardware accelerators include at least one of a deep learning accelerator (DLA) or a vision accelerator, as taught by Lie. The motivation for doing so would be to “provide improvements in one or more of accuracy, performance, and energy efficiency,” as recognized by Lie (see paragraph 0065).
This conclusion of obviousness corresponds to KSR rationale “A”: it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.
Regarding claim 12, the claim is substantially similar to claim 3. Please see the rejection for that claim.
Regarding claim 13, the claim is substantially similar to claim 4. Please see the rejection for that claim.
Regarding claim 14, the claim is substantially similar to claim 2. Please see the rejection for that claim.
Regarding claim 15, the claim is substantially similar to claim 7. Please see the rejection for that claim.
Claims 8, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Arditi et al. (US2019/0197430) in view of Lie et al. (US2018/0314941) in further view of Myers et al. (US2018/0074495).
Regarding claim 8, Arditi and Lie teach the autonomous or semi-autonomous machine of claim 7.
Yet Arditi and Lie do not further teach:
The autonomous or semi-autonomous machine of claim 7, wherein
at least one passenger of the one or more passengers is picked up based at least on identifying the at least one passenger using a first subset of the sensor data obtained using the one or more first sensors, and the at least one passenger is monitored between pick up and drop off using a second subset of the sensor data obtained using the one or more second sensors.
However, Myers teaches:
at least one passenger of the one or more passengers is picked up based at least on identifying the at least one passenger using a first subset of the sensor data obtained using the one or more first sensors (see Fig. 1 and paragraph 0020 for monitoring module 104 configured to “identify passengers, authenticate passengers, monitor passenger activity, and monitor passengers entering and exiting the vehicle”. See paragraph 0026 for module 104 using “a facial recognition algorithm that identifies a face of a person approaching the vehicle”. Other biometric information can also be used.), and the at least one passenger is monitored between pick up and drop off using a second subset of the sensor data obtained using the one or more second sensors (see paragraph 0028 for a passenger analysis module 214 that monitors passengers inside the vehicle. See paragraph 0030 and Fig. 3 for the vehicle 300 having interior cameras that monitor passengers. See paragraph 0034 for doing this using deep neural networks.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Arditi and Lie, to add the additional features of: at least one passenger of the one or more passengers is picked up based at least on identifying the at least one passenger using a first subset of the sensor data obtained using the one or more first sensors, and the at least one passenger is monitored between pick up and drop off using a second subset of the sensor data obtained using the one or more second sensors, as taught by Myers. The motivation for doing so would be to identify the correct person as the passenger and to monitor the passenger activity and customize the vehicle operation accordingly, including driving manner and destination reminders, as recognized by Myers (see paragraphs 0002, 0028, and 0041).
This conclusion of obviousness corresponds to KSR rationale “A”: it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.
Note that Arditi at least strongly teaches toward what Myers more explicitly teaches. See Arditi Fig. 10, step 1060, and paragraph 0078, for step 1060 being to obtain sensor data from lidar or GPS for capturing an environment of the vehicle. See paragraph 0078 for this data including gestures. See paragraph 0037 for recognizing motion of gestures. See Fig. 10, step 1060, and paragraph 0078, for step 1060 being to obtain sensor data of the “ride requestor…seated in the vehicle.”
Regarding claim 16, the claim is substantially similar to claim 8. Please see the rejection for that claim.
Regarding claim 20, the claim is substantially similar to claim 8. Please see the rejection for that claim.
Additional Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Tarjan et al. (US2011/0161616 A1), an Nvidia application, teaches in paragraph 0002 that “modern GPUs are massively parallel processors”. This defines GPUs.
Holz et al. (U.S. 9,868,449). See col. 24, line 47-col. 25, line 11 for a GPU used for gesture recognition of an occupant of an autonomous vehicle.
Askeland (U.S. 10,053,088), a Zoox disclosure. See col. 3, lines 39-59 for external sensors such as lidar and cameras. See Fig. 3A for “operation control system 300” using a GPU, or a CPU and GPU. See col. 17, lines 32-40 for obtaining interior sensor data 416. See col. 17, lines 41-54 for “vehicle interior” image data used for “facial recognition”. See Fig. 3B for item 389 including processors 393. See col. 14, lines 1-10 for processor 393 including a GPU, or a CPU and GPU.
Levinson et al. (US2017/0123428). See paragraph 0061 for using a GPU. See paragraph 0122 for using facial recognition of a passenger. The purpose of the facial recognition is to “grant ingress and egress” to users. The fact that egress is controlled using facial recognition implies there is an internal camera. See paragraph 0054 for a “perception engine” identifying “external objects”.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL M. ROBERT whose telephone number is (571)270-5841. The examiner can normally be reached M-F 7:30-4:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry can be reached at 571-272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL M. ROBERT/Primary Examiner, Art Unit 3665