DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s response to the last Office Action, filed 1/5/2026, has been entered and made of record.
Applicant has amended claims 1, 5, 10, 14-15, and 20. Claims 1-20 are currently pending.
Applicant's arguments filed 1/5/2026, with respect to the rejection of claims 1, 10, and 20 under 35 U.S.C. 103, have been fully considered but they are not persuasive.
Regarding claims 1, 10, and 20, applicant argues that neither Pighi nor Clifford teaches that the analysis techniques are selected such that the at least two analysis techniques have computational and implementation diversity.
The examiner, after further search and consideration, disagrees. While the examiner previously looked to Clifford for this limitation, Pighi teaches in Col. 43, Line 66 through Col. 44, Line 16: “The functions performed by the diagrams of FIGS. 1-13 may be implemented using one or more of a conventional general purpose processor, digital computer, microprocessor, microcontroller … Appropriate software, firmware, coding, routines, instructions, opcodes, microcode, and/or program modules may readily be prepared by skilled programmers based on the teachings of the disclosure, as will also be apparent to those skilled in the relevant art(s). The software is generally executed from a medium or several media by one or more of the processors of the machine implementation.” Clifford also teaches in ¶26 that operations of a soil targeting system can be distributed across multiple computer systems or computer program(s) running on one or more computers.
Pighi discloses diverse hardware "optimized for speed and/or efficiency" (Col. 16, Line 36). Pighi further discloses that the video processing pipeline 156 contains distinct circuits, such as the computer vision pipeline portion 162 and the disparity engine 164 (Col. 13, Lines 12-40).
Under Schulhauser, the “selecting” step is a conditional routine triggered by the presence of different data types. Because Pighi’s architecture requires routing different data to specific optimized pipelines, the selection is inherently required for the system to function as described. Ex parte Schulhauser, Appeal No. 2013-007847 (P.T.A.B. Apr. 28, 2016) (citing Bristol-Myers Squibb Co. v. Ben Venue Labs., Inc., 246 F.3d 1368, 1378 (Fed. Cir. 2001)); MPEP 2173.05(g).
However, the examiner updated the search and found art that teaches and further clarifies this limitation (Sung). A PHOSITA would find it obvious to use a scheduler, as taught by Sung, to select between these sub-modules based on the data type to ensure each task is performed more efficiently. The examiner also found pertinent art Moudgill (U.S. Patent Pub. No. 2020/0394495) and Hudson (U.S. Patent Pub. No. 2024/0243911).
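For clarity of the record only, the conditional, type-based selection routine described above can be sketched as follows. This sketch is illustrative, is not code from any cited reference, and every name in it (DataType, select_pipeline, the pipeline labels) is a hypothetical placeholder.

```python
from enum import Enum, auto

class DataType(Enum):
    """Kinds of incoming sensor data (illustrative only)."""
    STEREO_PAIR = auto()   # two views -> disparity processing
    MONO_FRAME = auto()    # single view -> CNN-based computer vision

# Hypothetical routing table: each data type is mapped to the hardware
# pipeline optimized for it, analogous to routing stereo data to a
# disparity engine and other frames to a computer vision pipeline.
PIPELINE_FOR = {
    DataType.STEREO_PAIR: "disparity_engine",
    DataType.MONO_FRAME: "cv_pipeline",
}

def select_pipeline(data_type: DataType) -> str:
    """Conditional 'selecting' step: the route taken depends on the
    type of data present (cf. a heterogeneous SoC scheduler)."""
    return PIPELINE_FOR[data_type]
```

In this sketch the selection is inherent in the dispatch: the system cannot process an input without first choosing the pipeline matched to its data type.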
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-8, 10-12, 14-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pighi (U.S. Patent No. 11,659,154) in view of Clifford (U.S. Patent Pub. No. 2023/0245335), and further in view of Sung, T.T.; Ha, J.; Kim, J.; Yahja, A.; Sohn, C.-B.; Ryu, B., "DeepSoCS: A Neural Scheduler for Heterogeneous System-on-Chip (SoC) Resource Scheduling," Electronics 2020, 9, 936 (hereinafter "Sung").
Regarding Claim 1, Pighi teaches a method comprising:
analyzing sensor data corresponding to an operational area of a machine (Col 23 Line 37-38: FIG. 3, a diagram illustrating the vehicle (machine) camera system 100 capturing an all-around view is shown … the all-around view 254a-254d (operational area) may enable an all-around view (AVM) system. The AVM system may comprise four cameras (e.g., each camera may comprise a combination of one of the lenses 112a-112n (or a stereo pair of the lenses 112a-112n) and one of the capture devices 102a-102n)) using a plurality of selected analysis techniques, the plurality of analysis techniques selected such that: (Col 1 Lines 43-50: The first capture device may be configured to generate first pixel data and the second capture device may be configured to generate second pixel data. The processor may be configured to receive the first pixel data and the second pixel data, generate a vertical disparity (first technique) image in response to the first pixel data and the second pixel data, generate a virtual horizontal (second technique) disparity image in response to the first pixel data:)
at least two analysis techniques have a computational diversity by performing different types of computational analyses on the sensor data; and (Fig. 11, 608, 610, 612; Col 41 Lines 15-17: Referring to FIG. 11, a method (or process) 600 is shown. The method 600 may generate a virtual horizontal disparity image; Fig. 12, Col 42 Lines 6-8: Referring to FIG. 12, a method (or process) 650 is shown. The method 650 may generate a virtual horizontal disparity image using a directed acyclic graph; These figures show the techniques performed are different from each other.)
at least two analysis techniques have implementation diversity by being implemented on different types of computing hardware (Col 43 Lines 66-67: The functions performed by the diagrams of FIGS. 1-13 may be implemented using one or more of a conventional general purpose processor;)
determining a detection of an object within the operational area based at least on the analyzing; and (Fig. 11, 616; Col 41 Lines 50-60: Next, in the step 614, the processors 106a-106n may perform computer vision operations on the virtual horizontal disparity image VRTHIMG and the vertical disparity image VDISP. For example, the CNN module 150 may be further trained to perform computer vision operations (e.g., object detection, object recognition, object classification, etc.) and may use the disparity values to infer depth information for each object detected. Next, the method 600 may move to the decision step 616. In the decision step 616, the processor 106a-106n may determine whether an object has been detected.)
controlling one or more operations of the machine based at least on the detection (Col 41-42 Lines 65-4: If an object has been detected, the method 600 may move to the step 618. In the step 618, the decision module 158 may make decisions based on the presence of the object detected. For example, the decision module 158 may determine an autonomous vehicle maneuver to perform, determine a warning to provide to the driver 202, etc.)
While Pighi hints that the analysis could be performed on different computing hardware, Pighi does not explicitly disclose this feature. One of ordinary skill in the art could recognize this as a common design modification, but for clarity of the record the examiner looks to Clifford to explicitly teach this limitation.
Clifford is in the same field of art of image analysis. Further, Clifford teaches at least two analysis techniques have implementation diversity by being implemented on different types of computing hardware (¶26 The operations performed by client device 106 and/or soil targeting system 102 may be distributed across multiple computer systems. In some implementations, soil targeting system 102 may be implemented as, for example, computer program(s) running on one or more computers in one or more locations that are coupled to each other through a network.)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pighi by implementing different types of computing hardware for different analysis techniques, as taught by Clifford; one of ordinary skill in the art would be motivated to combine the references to effectively analyze the image (a soil image in Clifford) (Clifford ¶2).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Sung is in the same field of art of computer processing. Further, Sung teaches at least two analysis techniques having implementation diversity by being implemented on different types of computing hardware (Abstract: we present a novel scheduling solution for a class of System-on-Chip (SoC) systems where heterogeneous chip resources (DSP, FPGA, GPU, etc.) must be efficiently scheduled for continuously arriving hierarchical jobs with their tasks represented by a directed acyclic graph.)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pighi by implementing different types of computing hardware for different analysis techniques using a scheduler, as taught by Sung; one of ordinary skill in the art would be motivated to combine the references to achieve higher resource efficiency (Sung, Introduction).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding Claim 2, Pighi in view of Clifford in view of Sung discloses the method of claim 1, wherein the sensor data comprises stereoscopic image data (Pighi, Col 41, Lines 22-37: In the step 604, the processors 106a-106n may receive the pixel data from one of the capture devices 102a-102b of the stereo camera 302. For example, the pixel data may be pixel data PXTL (or video frames) generated by the top capture device 102a of the vertically oriented stereo camera 302. Next, in the step 606, the processors 106a-106n may receive the pixel data from the other of the capture devices 102a-102b of the stereo camera 302. For example, the pixel data may be the pixel data PXBL (or video frames) generated by the bottom capture device 102b of the vertically oriented stereo camera 302. For example, the method 600 may be performed when the apparatus 100 operates in the virtual DSI generation mode of operation (e.g., one vertically oriented stereo camera 302 is implemented instead of the pair of vertically oriented stereo cameras 302l-302r).)
Regarding Claim 3, Pighi in view of Clifford in view of Sung discloses the method of claim 1, wherein the plurality of analysis techniques includes a plurality of depth techniques that vary in how depth from one or more reference points is determined (Pighi, Col 39 Lines 61-65: The processors 106a-106n may be further configured to infer depth by performing the analysis on the example video frame 550. The vertically oriented stereo camera 302 may provide disparity values that may be used to calculate the depth information; Col 40 Lines 13-17: The CNN module 150 may be configured to generate the virtual horizontally oriented disparity image VRTHIMG. The virtual horizontally oriented disparity image VRTHIMG may provide additional data points that may be used by the processors 106a-106n to infer depth information; Col 40 Lines 62-66: The CNN module 150 may be configured to complete the depth (or disparity) to generate the virtual horizontal disparity image VRTHIMG. The CNN module 150 may be configured to implement semi-global-matching and/or oracle.)
Regarding Claim 5, Pighi in view of Clifford in view of Sung discloses the method of claim 1, wherein the different types of computing hardware include one or more of: a central processing unit (CPU), a graphics processing unit (GPU), an optical flow accelerator (OFA), a deep learning accelerator (DLA), a programmable vision accelerator (PVA), a video input (VI) controller, an image signal processor (ISP), or a video image compositor (VIC) engine (Pighi, Col 43-44 Lines 66-28) (Clifford, ¶8 some implementations include one or more processors (e.g., central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s), and/or tensor processing unit(s) (TPU(s)) of one or more computing devices, where the one or more processors are operable to execute instructions stored in associated memory, and where the instructions are configured to cause performance of any of the aforementioned methods.)
Regarding Claim 6, Pighi in view of Clifford in view of Sung discloses the method of claim 1, wherein the determining of the detection of the object includes: determining a plurality of individual detection results that correspond to respective analysis techniques of the plurality of analysis techniques (Pighi, Col 38 Lines 42-46: The example video frame 550 may represent a video frame used by the processors 106a-106n to detect various objects using the data from the disparity images (e.g., the vertical disparity image and/or the virtual horizontal disparity image);) and determining an overall detection result based at least on the plurality of individual detection results (Pighi, Col 39 Lines 32-35: The object detection performed by the CNN module 150 may comprise a confidence level. The confidence level may provide an indication of how likely that the results of the object detection are accurate; the results from object detection taught in Col 38 are then used to be compared against a threshold confidence to determine an overall detection result.)
Regarding Claim 7, Pighi in view of Clifford in view of Sung discloses the method of claim 6, wherein the determining of the overall detection result includes: determining the overall detection result by combining the plurality of individual detection results, wherein one or more individual detection results of the plurality of individual detection results are weighted (Pighi, Col 10 Lines 61-63: The sensor fusion module 152 may adjust the weighting used to overlay the data to give more weight to reliable data and/or less weight to unreliable data) with respect to the combining based at least on respective confidence levels related to the one or more individual detection results (Pighi, Col 38 Lines 42-46: The example video frame 550 may represent a video frame used by the processors 106a-106n to detect various objects using the data from the disparity images (e.g., the vertical disparity image and/or the virtual horizontal disparity image); Col 39 Lines 32-35: The object detection performed by the CNN module 150 may comprise a confidence level. The confidence level may provide an indication of how likely that the results of the object detection are accurate; the results from object detection taught in Col 38 are then used to be compared against a threshold confidence to determine an overall detection result.)
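For clarity of the record only, the confidence-weighted combination addressed in this mapping can be sketched as follows. This is an illustrative example, not code from Pighi or any other cited reference; the function name, the (detected, confidence) representation, and the 0.5 decision level are hypothetical choices.

```python
def combine_detections(results):
    """Weighted combination of individual detection results.

    `results` is a list of (detected: bool, confidence: float) pairs,
    one per analysis technique. Each result contributes in proportion
    to its confidence, so reliable results carry more weight and
    unreliable results carry less (cf. sensor-fusion weighting).
    Returns the overall detection result.
    """
    total = sum(conf for _, conf in results)
    if total == 0:
        return False  # no usable evidence from any technique
    score = sum(conf for detected, conf in results if detected) / total
    return score >= 0.5  # overall detection if the weighted vote passes 0.5
```

Under this sketch, two high-confidence detections outweigh one low-confidence non-detection, so the overall result is a detection.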
Regarding Claim 8, Pighi in view of Clifford in view of Sung discloses the method of claim 6, wherein the determining of the overall detection result includes: determining the overall detection result based on whether a threshold number of the plurality of individual detection results indicate a detection (Pighi, Col 16 Lines 41-47: The processors 106a-106n may be configured to recognize objects. Objects may be recognized by interpreting numerical and/or symbolic information to determine that the visual data represents a particular type of object and/or feature. For example, the number of pixels and/or the colors of the pixels of the video data may be used to recognize portions of the video data as objects.)
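For clarity of the record only, the threshold-based determination recited in claim 8 amounts to k-of-n voting over the individual detection results; a minimal sketch follows. This is illustrative, not from the cited art, and the names are hypothetical.

```python
def threshold_vote(detections, threshold):
    """Overall detection result when at least `threshold` of the
    individual results indicate a detection (k-of-n voting)."""
    return sum(1 for detected in detections if detected) >= threshold
```

For example, with a threshold of two, agreement by two of three techniques yields an overall detection, while a single agreeing technique does not.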
Regarding claim 10, claim 10 has been analyzed with regard to claim 1 and is rejected for the same reasons of obviousness as set forth above, and further in accordance with Pighi teaching: A system comprising: one or more processors to cause performance of operations (Col 3 Lines 50-64: Referring to FIG. 1, a diagram illustrating an embodiment of the present invention 100 is shown. The apparatus 100 generally comprises and/or communicates with blocks (or circuits) 102a-102n, a block (or circuit) 104, blocks (or circuits) 106a-106n, a block (or circuit) 108, a block (or circuit) 110, blocks (or circuits) 112a-112n, a block (or circuit) 114, a block (or circuit) 116, blocks (or circuits) 118a-118n and/or a block (or circuit) 120. The circuits 102a-102n may each implement a capture device. The circuits 104 may implement an interface circuit. The circuits 106a-106n may each implement a processor (or co-processors). In an example implementation, the circuits 106a-106n may each be implemented as a video processor and/or a computer vision processor. The circuit 108 may implement a memory.)
Claim 11 recites limitations similar to claim 2 and is rejected under the same rationale and reasoning.
Claim 12 recites limitations similar to claim 3 and is rejected under the same rationale and reasoning.
Claim 14 recites limitations similar to claim 5 and is rejected under the same rationale and reasoning.
Claim 15 recites limitations similar to claim 6 and is rejected under the same rationale and reasoning.
Claim 16 recites limitations similar to claim 7 and is rejected under the same rationale and reasoning.
Claim 17 recites limitations similar to claim 8 and is rejected under the same rationale and reasoning.
Regarding Claim 19, Pighi in view of Clifford in view of Sung discloses the system of claim 10, wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for presenting at least one of augmented reality content, virtual reality content, or mixed reality content; a system for hosting one or more real-time streaming applications; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system implementing one or more large language models (LLMs); a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Pighi, Col 2 Lines 44-45: Embodiments of the present invention may be implemented as part of a computer vision system of a vehicle; Col 12 Lines 48-50: For example, in an autonomous vehicle implementation, the decision making module 158 may determine which direction to turn.)
Regarding Claim 20, Pighi teaches a system comprising: processing circuitry to perform one or more operations associated with a machine based at least on:
determining a final detection result based at least on two or more individual detection results computed using the two or more algorithms as selected (Pighi, Col 13 Lines 12-32: The video processing pipeline 156 is shown comprising a block (or circuit) 162 and/or a block (or circuit) 164. The circuit 162 may implement a computer vision pipeline portion. The circuit 164 may implement a disparity engine. The video processing pipeline 156 may comprise other components (not shown). The number and/or type of components implemented by the video processing pipeline 156 may be varied according to the design criteria of a particular implementation; computer vision pipeline portion 162 may be configured to implement a computer vision algorithm in dedicated hardware. The computer vision pipeline portion 162 may implement a number of sub-modules designed to perform various calculations used to perform feature detection in images (e.g., video frames). Implementing sub-modules may enable the hardware used to perform each type of calculation to be optimized for speed and/or efficiency. For example, the sub-modules may implement a number of relatively simple operations that are used frequently in computer vision operations that, together, may enable the computer vision algorithm to be performed in real-time)
While Pighi hints that the analysis could be performed on two or more distinct hardware components, Pighi does not explicitly disclose this feature. One of ordinary skill in the art could recognize this as a common design modification, but for clarity of the record the examiner looks to Clifford to explicitly teach this limitation.
Clifford is in the same field of art of image analysis. Further, Clifford teaches at least two analysis techniques having implementation diversity by being implemented on different types of computing hardware (¶26 The operations performed by client device 106 and/or soil targeting system 102 may be distributed across multiple computer systems. In some implementations, soil targeting system 102 may be implemented as, for example, computer program(s) running on one or more computers in one or more locations that are coupled to each other through a network.)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pighi by implementing different types of computing hardware for different analysis techniques, as taught by Clifford; one of ordinary skill in the art would be motivated to combine the references to effectively analyze the image (a soil image in Clifford) (Clifford ¶2).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Sung is in the same field of art of computer processing. Further, Sung teaches selecting at least two analysis techniques having implementation diversity by being implemented on different types of computing hardware (Abstract: we present a novel scheduling solution for a class of System-on-Chip (SoC) systems where heterogeneous chip resources (DSP, FPGA, GPU, etc.) must be efficiently scheduled for continuously arriving hierarchical jobs with their tasks represented by a directed acyclic graph.)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pighi by implementing different types of computing hardware for different analysis techniques using a scheduler, as taught by Sung; one of ordinary skill in the art would be motivated to combine the references to achieve higher resource efficiency (Sung, Introduction).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pighi (U.S. Patent No. 11,659,154) in view of Clifford (U.S. Patent Pub. No. 2023/0245335), in view of Sung, and further in view of Toyoda (U.S. Patent Pub. No. 2022/0055618).
Regarding Claim 9, Pighi in view of Clifford in view of Sung teaches the method of claim 8.
Pighi in view of Clifford in view of Sung does not explicitly disclose wherein the threshold number is based at least on a target safety level associated with the operational area.
Toyoda is in the same field of art of image analysis. Further, Toyoda teaches wherein the threshold number is based at least on a target safety level associated with the operational area (¶59 when the situation around the vehicle 10 corresponds to one of the following exceptional situations, it is supposed that the degree of safety of the vehicle 10 is relatively low. In this case, the computation control unit 32 sets the degree of omission of computation to a degree indicating that computation of the classifier will not be omitted; ¶65 Regarding situation 1, for example, the computation control unit 32 counts the number of predetermined objects around the vehicle 10, based on the result of detection of target objects around the vehicle 10 received from the object detection unit 31. Then, the computation control unit 32 compares the number of predetermined objects with the predetermined number, and determines that the situation around the vehicle 10 corresponds to situation 1, when the number of predetermined objects is not less than the predetermined number. The faster the speed of the vehicle 10, the smaller the computation control unit 32 may make the predetermined number. This enables the computation control unit 32 to more appropriately set the degree of omission of computation.)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Pighi in view of Clifford in view of Sung by determining a threshold based on a target safety level that is taught by Toyoda; thus, one of ordinary skilled in the art would be motivated to combine the references to provide an apparatus for object detection that ensures the safety of a vehicle and can reduce the amount of computation required to detect an object around the vehicle (Toyoda ¶6).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
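For clarity of the record only, a safety-dependent threshold of the kind addressed by Toyoda can be sketched as follows. This is purely illustrative: the particular mapping from a target safety level to a required count, and every name used, are hypothetical assumptions and do not represent Toyoda's actual method.

```python
def required_detections(target_safety_level: float, n_techniques: int) -> int:
    """Choose how many of `n_techniques` individual detection results
    must agree before an object is treated as detected, based on a
    target safety level in [0.0, 1.0].

    A higher target safety level lowers the required count (fail-safe
    bias: act on weaker evidence when safety demands it), loosely
    analogous to Toyoda adjusting a predetermined number as risk rises.
    """
    # At safety level 1.0 a single detection suffices; at 0.0 all
    # techniques must agree. Always require at least one.
    return max(1, round(n_techniques * (1.0 - target_safety_level)))
```

For example, with four techniques and a high target safety level of 0.9, a single agreeing result would suffice, whereas a target safety level of 0.0 would require all four to agree.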
Claim 18 recites limitations similar to claim 9 and is rejected under the same rationale and reasoning.
Allowable Subject Matter
Claims 4 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claims 4 and 13, no prior art teaches wherein the plurality of depth techniques include at least a semi-global matching (SGM), a Bi3D, and an efficient semi-supervised depth (ESS).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Moudgill (U.S. Patent Pub. No. 2020/0394495) teaches in ¶19 that a GSNN accelerator circuit may include a hardware circuit for accelerating the calculations of a neural network. At the top level, the GSNN accelerator circuit is a heterogeneous computing system that may include multiple programmable circuit blocks that may run concurrently in parallel, where each circuit block may be designated to perform a specific kind of task (e.g., input, filter, post-processing, output, etc.).
Hudson (U.S. Patent Pub. No. 2024/0243911) teaches object detection using hashing. ¶104 teaches that different systems often use different hashing and hash different data, which means that system components that are not related or interoperable effectively do not communicate with each other, because it is neither technically nor financially feasible to force a homogeneous hardware and custom software deployment, which may dramatically change an edge system's role and how it performs with other edge systems. Edge systems are often heterogeneous in both their hardware and software, and are connected together either in a local mesh or to core enterprise resource management systems.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUSTIN BILODEAU whose telephone number is (571)272-1032. The examiner can normally be reached 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUSTIN BILODEAU/Examiner, Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664