Prosecution Insights
Last updated: April 19, 2026
Application No. 18/314,450

DYNAMIC DISTRIBUTED TRAINING OF MACHINE LEARNING MODELS

Non-Final OA · §101, §103
Filed
May 09, 2023
Examiner
TRAN, QUOC A
Art Unit
2145
Tech Center
2100 — Computer Architecture & Software
Assignee
Intel Corporation
OA Round
1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Grants 80% — above average
Career Allow Rate: 80% (590 granted / 735 resolved; +25.3% vs TC avg)
Interview Lift: +29.4% on resolved cases with interview (strong)
Typical timeline: 3y 4m avg prosecution; 21 applications currently pending
Career history: 756 total applications across all art units

Statute-Specific Performance

§101: 21.8% (-18.2% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 735 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This is a Non-Final Office Action in response to the patent application filed 05/09/2023, which is a continuation of Application No. 15/494,971, filed 04/24/2017, now U.S. Patent No. 11,797,837, issued 10/24/2023. Claims 1-20 are pending. Claims 1, 9 and 17 are independent.

In addition, in the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

Signed and dated copies of applicant's IDSs, filed 07/12/2023, 01/02/2025 and 01/16/2026, are attached to this Office Action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 fail to recite statutory subject matter, as defined in 35 U.S.C. 101, because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Step 1: YES (the claims are directed to a process, machine, manufacture, or composition of matter). The claims include...
"a graphics processor comprising: a memory device; a graphics processing cluster coupled with the memory device, the graphics processing cluster including a plurality of graphics multiprocessors, the plurality of graphics multiprocessors interconnected via a data interconnect, wherein a graphics multiprocessor of the plurality of graphics multiprocessors includes circuitry configured to load a modular neural network including a plurality of subnetworks, each of the plurality of subnetworks trained to perform a computer vision operation on a separate subject, the graphics multiprocessor configured to: load weights associated with a baseline set of layers of the modular neural network to the memory device; determine a first subject associated with a deployment environment; load weights for a first subnetwork to the memory device, the first subnetwork trained to recognize the first subject; and perform a first matrix operation associated with the first subnetwork to facilitate a first computer vision operation on a first image captured within the deployment environment, wherein the first image includes the first subject"

...and therefore fall into one of the four categories of patent-eligible subject matter (process, machine, manufacture, or composition of matter).

Step 2A, Prong One (does the claim recite a judicial exception?): the claims recite...
"a graphics processor comprising: a memory device; a graphics processing cluster coupled with the memory device, the graphics processing cluster including a plurality of graphics multiprocessors, the plurality of graphics multiprocessors interconnected via a data interconnect, wherein a graphics multiprocessor of the plurality of graphics multiprocessors includes circuitry configured to load a modular neural network including a plurality of subnetworks, each of the plurality of subnetworks trained to perform a computer vision operation on a separate subject, the graphics multiprocessor configured to: load weights associated with a baseline set of layers of the modular neural network to the memory device; determine a first subject associated with a deployment environment; load weights for a first subnetwork to the memory device, the first subnetwork trained to recognize the first subject; and perform a first matrix operation associated with the first subnetwork to facilitate a first computer vision operation on a first image captured within the deployment environment, wherein the first image includes the first subject"

These limitations recite a mathematical calculation: the training of the machine model to recognize the first subject and the performance of a first matrix operation, followed by "apply it" (i.e., load weights associated with a baseline set of layers of the modular neural network...), as described in US 20230334316 A1, Para 147; i.e., the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework, which are computational operations that are performed while training a convolutional neural network (CNN).
The machine learning framework can also provide primitives to implement basic linear algebra subprograms performed by many machine-learning algorithms, such as matrix and vector operations, i.e., "apply it" (the weights associated with a baseline set of layers of the modular neural network to the memory device).

Step 2A, Prong Two (do the claims recite additional elements that integrate the judicial exception into a practical application?): The claims recite additional limitations such as a "graphics multiprocessor / memory device" to: load weights associated with a baseline set of layers of the modular neural network to the memory device; determine a first subject associated with a deployment environment; load weights for a first subnetwork to the memory device, the first subnetwork trained to recognize the first subject; and perform a first matrix operation associated with the first subnetwork to facilitate a first computer vision operation on a first image captured within the deployment environment. It is noted that any improvement is in the abstract idea itself and does not integrate the judicial exception into a practical application (see US 20230334316 A1, Para 3; i.e., parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In an SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. The efficiency provided by parallel machine learning algorithm implementations allows the use of high-capacity networks and enables those networks to be trained on larger datasets), i.e., to load weights for a first subnetwork to the memory device, the first subnetwork trained to recognize the first subject; and perform a first matrix operation...
These limitations only recite generic computer components that amount to mere instructions to implement the abstract idea on a computer, and therefore do not integrate the judicial exception into a practical application (MPEP 2106.04(d), 2106.05(f)).

Step 2B (does the claim amount to significantly more?): The claims recite additional limitations such as a "graphics multiprocessor / memory device" to: load weights associated with a baseline set of layers of the modular neural network to the memory device; determine a first subject associated with a deployment environment; load weights for a first subnetwork to the memory device, the first subnetwork trained to recognize the first subject; and perform a first matrix operation associated with the first subnetwork to facilitate a first computer vision operation on a first image captured within the deployment environment. These limitations only recite generic computer components that amount to mere instructions to implement the abstract idea on a computer, and therefore do not amount to significantly more than the abstract idea itself (MPEP 2106.05, 2106.04(d) and 2106.05(f)).
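For orientation, the claimed flow (load baseline-layer weights once, then load per-subject subnetwork weights and perform a matrix operation on an image) can be sketched in a few lines. This is a hypothetical illustration only; the class name, shapes, and subject labels are invented and are not taken from the application or the cited references.

```python
import numpy as np

class ModularNetwork:
    """Hypothetical sketch of the claimed modular-network arrangement."""

    def __init__(self, baseline_weights):
        # Weights of the shared "baseline set of layers", loaded once.
        self.baseline = baseline_weights      # shape (d_in, d_feat)
        self.subnetworks = {}                 # subject -> subnetwork weights

    def load_subnetwork(self, subject, weights):
        # Load weights for a subnetwork trained to recognize one subject.
        self.subnetworks[subject] = weights   # shape (d_feat, n_classes)

    def infer(self, subject, image_vec):
        # Baseline layers feed the selected subnetwork (the "first matrix
        # operation" of the claim, modeled here as two matrix products).
        features = image_vec @ self.baseline
        return features @ self.subnetworks[subject]

rng = np.random.default_rng(0)
net = ModularNetwork(rng.standard_normal((16, 8)))
net.load_subnetwork("pedestrian", rng.standard_normal((8, 2)))
scores = net.infer("pedestrian", rng.standard_normal(16))
print(scores.shape)   # (2,)
```

The point of the sketch is the swap-in step: only the small per-subject weight matrix changes between subjects, while the baseline weights stay resident in memory.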
As to dependent claims 2-8, 10-16 and 18-20, these further recite additional limitations such as: dynamically load weights for a second subnetwork; perform a second matrix operation; the second image includes the second subject; output associated with the baseline set of layers of the modular neural network is provided as input to the first subnetwork to perform the first computer vision operation and as input to the second subnetwork to perform the second computer vision operation; apply a first priority to the first subnetwork; apply a second priority to the second subnetwork; adjust a resource allocation within the graphics processing cluster based on the first priority and the second priority; load first optical flow data associated with a first sequence of images; perform a fourth matrix operation; facilitate detection of a second set of stationary objects of the second subject; the second optical flow data includes dense optical flow data; determine velocities of moving objects; and assign a hazard associated with the moving objects and the second set of stationary objects based on a velocity and trajectory of the moving objects, etc. These limitations only amount to mere instructions to implement the abstract idea and do not include elements that amount to significantly more than the abstract idea, and are also rejected under the same rationale. Accordingly, claims 1-20 fail to recite statutory subject matter, as defined in 35 U.S.C. 101.

Claim Rejections - 35 U.S.C. 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 9-12 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Haruki et al. (US 20180121806 A1, filed 02/03/2017) [hereinafter "Haruki"], in view of Kanno et al. (US 20180260687 A1, filed 04/26/2016) [hereinafter "Kanno"].

Independent Claim 1, Haruki teaches: A graphics processor comprising: a memory device; a graphics processing cluster coupled with the memory device, the graphics processing cluster including a plurality of graphics multiprocessors, the plurality of graphics multiprocessors interconnected via a data interconnect, wherein a graphics multiprocessor of the plurality of graphics multiprocessors, (Haruki, in the Abstract and Paras 2 and 5, discloses the efficient parallel training of a neural network model on multiple graphics processing units; artificial neural networks (ANNs) are computational models usually presented as systems of interconnected "neurons" that can compute values from inputs by feeding information through the network. ANNs generally include sets of adaptive weights, i.e., numerical parameters that are tuned by a learning algorithm. The adaptive weights are, conceptually, connection strengths between "neurons," which are activated during training and prediction...)
Haruki further teaches: includes circuitry configured to load a modular neural network including a plurality of subnetworks, each of the plurality of subnetworks trained to perform a computer vision operation on a separate subject, (Haruki, in Paras 3-4 and 19-20, discloses that deep neural networks (DNNs) typically incorporate large models trained on big datasets... Training is often accelerated by using graphics processing units (GPUs) and parallelizing the training with data parallelism. This is particularly effective for convolutional neural networks. The layers in convolutional neural networks usually start with convolutional layers having a small number of parameters and end with fully connected layers having a large number of parameters... It is noted that convolutional networks have wide applications in image and video recognition...) Haruki further teaches: determine a first subject associated with a deployment environment; (Haruki, Para 18, discloses that deep learning using a convolutional neural network is an effective tool for solving complex problems in computer vision, speech recognition, and natural language processing. For example, deep learning has been successfully used to recognize objects in digital images...) It is noted that Haruki discloses a method of efficient parallel training of a neural network model on multiple graphics processing units, wherein the training module collects gradients of multiple layers during backpropagation of training from a plurality of graphics processing units (GPUs). However, Haruki does not expressly teach, but the combination of Haruki and Kanno teaches, the graphics multiprocessor configured to: load weights associated with a baseline set of layers of the modular neural network to the memory device;...
load weights for a first subnetwork to the memory device, the first subnetwork trained to recognize the first subject; and perform a first matrix operation associated with the first subnetwork to facilitate a first computer vision operation on a first image captured within the deployment environment, wherein the first image includes the first subject... (In Kanno, the Abstract and Paras 21-24 describe a method in which a unit performs an operation on data of a second layer using data of a first layer, and performs an operation on data of the first layer using data of the second layer, in a multi-layered neural network. The weight data decides a relation between each piece of data of the first layer and each piece of data of the second layer in both operations, and the weight data is stored in one storage holding unit as all the weight coefficient matrices to be constructed. Further, an operation unit is provided that includes product-sum operators, which are constituent elements of the weight coefficient matrix and correspond to operations of matrix elements in a one-to-one manner; when the matrix elements constituting the weight coefficient matrix are stored in the storage holding unit, they are stored using a row vector of the matrix as a basic unit, and the operation of the weight coefficient matrix is performed in the basic units in which the storage is performed in the storage holding unit. Moreover, in a case in which the data of the first layer is calculated from the data of the second layer using the weight coefficient matrix, the data of the second layer is arranged similarly to the column vector of the matrix and each element is input to the product-sum operator; at the same time, the first row of the weight coefficient matrix is input to the product-sum operator, a multiplication operation on both pieces of data is performed, and the operation result is stored in the accumulator. When the second and subsequent rows of the weight coefficient matrix are calculated, the data of the second layer is shifted to the left or the right each time a row operation of the weight matrix is performed, and a multiplication operation of the element data of the corresponding row of the weight coefficient matrix and the arranged data of the second layer is performed; the data stored in the accumulator of the same operation unit is then added, and a similar operation is performed up to the N-th row of the weight coefficient matrix. Also, Kanno, Paras 138-141, further mentions that the same target is imaged through a plurality of cameras and an image recognition process is executed. Since a video captured by camera 1 and a video captured by camera 2 differ in position, the shapes of the subject are different although the same subject is imaged. Therefore, it is efficient, since it is possible to acquire information at the same time under different conditions, such as a photographing angle or a degree of radiation of light rays, and to perform the recognition and the learning, utilizing the weight coefficient matrix input to the product-sum operator as described in Paras 21-24.) Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Haruki's parallel training of a neural network model on multiple graphics processing units to include means by which the graphics multiprocessor is configured to: load weights associated with a baseline set of layers of the modular neural network to the memory device;...
load weights for a first subnetwork to the memory device, the first subnetwork trained to recognize the first subject; and perform a first matrix operation associated with the first subnetwork to facilitate a first computer vision operation on a first image captured within the deployment environment, wherein the first image includes the first subject, AS TAUGHT BY Kanno, which provides an improvement in the recognition rate by a convolutional neural network in the image recognition field. Deep learning can be applied to a wide variety of devices, from image recognition terminals for automatic driving to cloud computing for big data analysis [Kanno, Paras 2-4]. It is noted that the KSR ruling recommends that references directed to similar subject matter be combined.

Claim 2, Haruki and Kanno further teach: the graphics multiprocessor configured to: dynamically load weights for a second subnetwork to the memory device, the second subnetwork trained to recognize a second subject associated with the deployment environment; and perform a second matrix operation associated with the second subnetwork to facilitate a second computer vision operation on a second image captured within the deployment environment, wherein the second image includes the second subject; (In Kanno, the Abstract and Paras 21-24 describe a method in which a unit performs an operation on data of a second layer using data of a first layer, and performs an operation on data of the first layer using data of the second layer, in a multi-layered neural network. The weight data decides a relation between each piece of data of the first layer and each piece of data of the second layer in both operations, and the weight data is stored in one storage holding unit as all the weight coefficient matrices to be constructed. Further, an operation unit is provided that includes product-sum operators, which are constituent elements of the weight coefficient matrix and correspond to operations of matrix elements in a one-to-one manner; when the matrix elements constituting the weight coefficient matrix are stored in the storage holding unit, they are stored using a row vector of the matrix as a basic unit, and the operation of the weight coefficient matrix is performed in the basic units in which the storage is performed in the storage holding unit. Moreover, in a case in which the data of the first layer is calculated from the data of the second layer using the weight coefficient matrix, the data of the second layer is arranged similarly to the column vector of the matrix and each element is input to the product-sum operator; at the same time, the first row of the weight coefficient matrix is input to the product-sum operator, a multiplication operation on both pieces of data is performed, and the operation result is stored in the accumulator. When the second and subsequent rows of the weight coefficient matrix are calculated, the data of the second layer is shifted to the left or the right each time a row operation of the weight matrix is performed, and a multiplication operation of the element data of the corresponding row of the weight coefficient matrix and the arranged data of the second layer is performed; the data stored in the accumulator of the same operation unit is then added, and a similar operation is performed up to the N-th row of the weight coefficient matrix. Also, Kanno, Paras 138-141, further mentions that the same target is imaged through a plurality of cameras and an image recognition process is executed. Since a video captured by camera 1 and a video captured by camera 2 differ in position, the shapes of the subject are different although the same subject is imaged. Therefore, it is efficient, since it is possible to acquire information at the same time under different conditions, such as a photographing angle or a degree of radiation of light rays, and to perform the recognition and the learning, utilizing the weight coefficient matrix input to the product-sum operator as described in Paras 21-24.) Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Haruki's parallel training of a neural network model on multiple graphics processing units to include means to dynamically load weights for a second subnetwork to the memory device, the second subnetwork trained to recognize a second subject associated with the deployment environment, and perform a second matrix operation associated with the second subnetwork to facilitate a second computer vision operation on a second image captured within the deployment environment, wherein the second image includes the second subject, AS TAUGHT BY Kanno, which provides an improvement in the recognition rate by a convolutional neural network in the image recognition field. Deep learning can be applied to a wide variety of devices, from image recognition terminals for automatic driving to cloud computing for big data analysis [Kanno, Paras 2-4]. It is noted that the KSR ruling recommends that references directed to similar subject matter be combined.
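The row-by-row product-sum scheme attributed to Kanno above (each row of the weight coefficient matrix fed to product-sum operators, with per-element accumulators, through the N-th row) is, in effect, a matrix-vector multiply. A minimal sketch, with all variable names invented for illustration:

```python
import numpy as np

def row_wise_product_sum(weight_matrix, layer_data):
    """Compute weight_matrix @ layer_data one row at a time, with an
    explicit accumulator per output element, mirroring the described
    product-sum (multiply-accumulate) organization."""
    n_rows, n_cols = weight_matrix.shape
    out = np.zeros(n_rows)
    for r in range(n_rows):          # first row ... N-th row
        acc = 0.0                    # accumulator of the operation unit
        for c in range(n_cols):      # product-sum over the row elements
            acc += weight_matrix[r, c] * layer_data[c]
        out[r] = acc
    return out

W = np.array([[1.0, 2.0], [3.0, 4.0]])   # weight coefficient matrix
x = np.array([1.0, 1.0])                 # data of the second layer
print(row_wise_product_sum(W, x))        # [3. 7.]
```

The hardware-level shifting of layer data that Kanno describes is abstracted away here; only the row-at-a-time accumulation structure is kept.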
Claim 3, Haruki and Kanno further teach: wherein the graphics multiprocessor is configured such that output associated with the baseline set of layers of the modular neural network is provided as input to the first subnetwork to perform the first computer vision operation and as input to the second subnetwork to perform the second computer vision operation; (In Kanno, the Abstract and Paras 21-24 describe a method in which a unit performs an operation on data of a second layer using data of a first layer, and performs an operation on data of the first layer using data of the second layer, in a multi-layered neural network. The weight data decides a relation between each piece of data of the first layer and each piece of data of the second layer in both operations, and the weight data is stored in one storage holding unit as all the weight coefficient matrices to be constructed. Further, an operation unit is provided that includes product-sum operators, which are constituent elements of the weight coefficient matrix and correspond to operations of matrix elements in a one-to-one manner; when the matrix elements constituting the weight coefficient matrix are stored in the storage holding unit, they are stored using a row vector of the matrix as a basic unit, and the operation of the weight coefficient matrix is performed in the basic units in which the storage is performed in the storage holding unit. Moreover, in a case in which the data of the first layer is calculated from the data of the second layer using the weight coefficient matrix, the data of the second layer is arranged similarly to the column vector of the matrix and each element is input to the product-sum operator; at the same time, the first row of the weight coefficient matrix is input to the product-sum operator, a multiplication operation on both pieces of data is performed, and the operation result is stored in the accumulator. When the second and subsequent rows of the weight coefficient matrix are calculated, the data of the second layer is shifted to the left or the right each time a row operation of the weight matrix is performed, and a multiplication operation of the element data of the corresponding row of the weight coefficient matrix and the arranged data of the second layer is performed; the data stored in the accumulator of the same operation unit is then added, and a similar operation is performed up to the N-th row of the weight coefficient matrix. Also, Kanno, Paras 138-141, further mentions that the same target is imaged through a plurality of cameras and an image recognition process is executed. Since a video captured by camera 1 and a video captured by camera 2 differ in position, the shapes of the subject are different although the same subject is imaged. Therefore, it is efficient, since it is possible to acquire information at the same time under different conditions, such as a photographing angle or a degree of radiation of light rays, and to perform the recognition and the learning, utilizing the weight coefficient matrix input to the product-sum operator as described in Paras 21-24.) Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Haruki's parallel training of a neural network model on multiple graphics processing units to include means such that output associated with the baseline set of layers of the modular neural network is provided as input to the first subnetwork to perform the first computer vision operation and as input to the second subnetwork to perform the second computer vision operation, AS TAUGHT BY Kanno, which provides an improvement in the recognition rate by a convolutional neural network in the image recognition field.
Deep learning can be applied to a wide variety of devices, from image recognition terminals for automatic driving to cloud computing for big data analysis [Kanno, Paras 2-4]. It is noted that the KSR ruling recommends that references directed to similar subject matter be combined.

Claim 4, Haruki and Kanno further teach: the graphics multiprocessor configured to: apply a first priority to the first subnetwork; apply a second priority to the second subnetwork; and adjust a resource allocation within the graphics processing cluster based on the first priority and the second priority; (Kanno, in Paras 106-107 and FIG. 7, illustrates a data operation technique for efficiently operating the hierarchical DNN system in a FIFO order from an upper-level hierarchy to a lower-level hierarchy (DNN1, DNN2, ..., DNNn, and so on); if the recognition score information obtained by performing the recognition process in DNN1, and the neural network configuration information and the weight coefficient information of the DNN1 device, are simultaneously stored, the efficiency is good when additional learning is performed in the second-hierarchy machine learning/recognizing device DNN2, and so on (under the BRI, this is recognized as adjusting a resource allocation within the graphics processing cluster based on the first priority and the second priority, as claimed).
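Claim 4's priority-based allocation can be pictured with a tiny sketch. The proportional-split rule below is an assumption made for illustration; neither the application nor Kanno specifies this particular policy, and the subnetwork names are invented.

```python
def allocate(resources, priorities):
    """Split a resource budget across subnetworks in proportion to their
    assigned priorities (an assumed policy, for illustration only)."""
    total = sum(priorities.values())
    return {name: resources * p / total for name, p in priorities.items()}

# First subnetwork gets priority 3, second gets priority 1: a 75/25 split
# of, say, 100 units of compute within the graphics processing cluster.
shares = allocate(100, {"subnet_a": 3, "subnet_b": 1})
print(shares)   # {'subnet_a': 75.0, 'subnet_b': 25.0}
```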
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Haruki's parallel training of a neural network model on multiple graphics processing units to include means to apply a first priority to the first subnetwork, apply a second priority to the second subnetwork, and adjust a resource allocation within the graphics processing cluster based on the first priority and the second priority, AS TAUGHT BY Kanno, which provides an improvement in the recognition rate by a convolutional neural network in the image recognition field. Deep learning can be applied to a wide variety of devices, from image recognition terminals for automatic driving to cloud computing for big data analysis [Kanno, Paras 2-4]. It is noted that the KSR ruling recommends that references directed to similar subject matter be combined.

Regarding Claims 9-12, these respectively fully incorporate subject matter similar to claims 1-4, cited above. Regarding Claims 17-20, these respectively fully incorporate subject matter similar to claims 1-4, cited above.

Claims 5-8 and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Haruki et al. (US 20180121806 A1, filed 02/03/2017) [hereinafter "Haruki"], in view of Kanno et al. (US 20180260687 A1, filed 04/26/2016) [hereinafter "Kanno"], and further in view of Mahmoudi et al., NPL ("Real-time motion tracking using optical flow on multiple GPUs"), published 2014, 12 pages (pages 139-150) [hereinafter "Mahmoudi"].

Claim 5: It is noted that the GPUs of Haruki and Kanno do not expressly teach, but the combination of Haruki, Kanno and Mahmoudi teaches, the limitations:
load first optical flow data associated with a first sequence of images that includes the first image; and perform a third matrix operation associated with the first subnetwork to facilitate detection of a first set of stationary objects of the first subject, the third matrix operation performed based on the first optical flow data and the first sequence of images; (Mahmoudi, beginning at section 4, "Motion tracking algorithm," discloses GPU-based motion tracking using optical flow and a matrix operation performed based on the optical flow data and the sequence of images/video (a set of stationary objects such as those located on static objects like trees or a building).) Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Haruki/Kanno's parallel training of a neural network model on multiple graphics processing units to include means to load first optical flow data associated with a first sequence of images that includes the first image, the third matrix operation performed based on the first optical flow data and the first sequence of images, AS TAUGHT BY Mahmoudi, which provides an improvement in the recognition rate by a convolutional neural network in the image recognition field. Deep learning can be applied to a wide variety of devices, from image recognition terminals for automatic driving to cloud computing for big data analysis [Kanno, Paras 2-4]. It is noted that the KSR ruling recommends that references directed to similar subject matter be combined.
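The stationary-object reasoning above (near-zero dense optical flow marks static scene content) can be sketched directly. This is an illustration only, not Mahmoudi's algorithm: the threshold and array shapes are invented, and dense flow here is simply one displacement vector per pixel.

```python
import numpy as np

def stationary_mask(flow, threshold=0.5):
    """Classify pixels as stationary from dense optical flow.

    flow: (H, W, 2) per-pixel displacement between consecutive frames.
    Returns a boolean mask, True where the per-pixel speed (in pixels
    per frame) is below the assumed threshold."""
    speed = np.linalg.norm(flow, axis=-1)
    return speed < threshold

# A 4x4 frame where a single pixel moves (displacement (3, 4), speed 5.0)
# and everything else is static.
flow = np.zeros((4, 4, 2))
flow[0, 0] = (3.0, 4.0)
mask = stationary_mask(flow)
print(int(mask.sum()))    # 15
```

A velocity estimate for the moving pixels (as in claim 8's hazard assignment) would follow from the same `speed` array multiplied by the frame rate.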
Claim 6, Haruki, Kanno and Mahmoudi further teach: the graphics multiprocessor configured to: load second optical flow data associated with a second sequence of images that includes the second image; and perform a fourth matrix operation associated with the second subnetwork to facilitate detection of a second set of stationary objects of the second subject; (Kanno, Para 142, further mentions a machine learning system using different sensors (for example, a camera and a microphone), i.e., detecting a set of stationary objects of the first/second/third/fourth...Nth subject. Moreover, Kanno, Paras 21-24, mentions an operation unit including product-sum operators, which are constituent elements of the weight coefficient matrix and correspond to operations of matrix elements in a one-to-one manner; when the matrix elements constituting the weight coefficient matrix are stored in the storage holding unit, they are stored using a row vector of the matrix as a basic unit, and the operation of the weight coefficient matrix is performed in basic units, i.e., perform a first, second, third, fourth...nth matrix operation associated with the ANNs.) Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Haruki and Mahmoudi's multiple graphics processing units to include means to load second optical flow data associated with a second sequence of images that includes the second image, and perform a fourth matrix operation associated with the second subnetwork to facilitate detection of a second set of stationary objects of the second subject, AS TAUGHT BY Kanno, which provides an improvement in the recognition rate by a convolutional neural network in the image recognition field.
The deep learning can be applied to a wide variety of devices from image recognition terminals for automatic driving to cloud computing for big data analysis...[In Kanno Para 2-4]. It is noted the KSR ruling recommends references directed to similar subject matter to be combined. It is noted, the GPUs of Haruki and Kanno do not expressly teach, but the combination of Haruki and Kanno and Mahmoudi teach the limitations said the fourth matrix operation performed based on the second optical flow data and the second sequence of images; (In Mahmoudi Pages 131 -146 begin section 4. “Motion tracking algorithm”, discloses the GPUs-based motion tracking using the optical flow and matrix operation performed based on the optical flow data and the sequence of images/video...) Accordingly, it would have been obvious to one having ordinary skill in the art at the time before the effective filing date of the claimed invention was made to modify Haruki/Kanno’s parallel training of a neural network model on multiple graphics processing units, to include a means said ,... the fourth matrix operation performed based on the second optical flow data and the second sequence of images.. AS TAUGHT BY Mahmoudi, provides an improvement in a recognition rate by a convolutional neural network in an image recognition field. The deep learning can be applied to a wide variety of devices from image recognition terminals for automatic driving to cloud computing for big data analysis...[In Kanno Para 2-4]. It is noted the KSR ruling recommends references directed to similar subject matter to be combined. Claim 7, Haruki and Kanno and Mahmoudi further teach: wherein the first optical flow data and the second optical flow data includes dense optical flow data, (In Mahmoudi Pages 131 -146 begin section 4. 
“Motion tracking algorithm”, discloses the GPUs-based motion tracking using the optical flow and matrix operation performed based on the optical flow data and the sequence of images/video...and dense optical flow which tracks all frame pixels without selecting any features..) Accordingly, it would have been obvious to one having ordinary skill in the art at the time before the effective filing date of the claimed invention was made to modify Haruki/Kanno’s parallel training of a neural network model on multiple graphics processing units, to include a means said ,... wherein the first optical flow data and the second optical flow data includes dense optical flow data.. AS TAUGHT BY Mahmoudi, provides an improvement in a recognition rate by a convolutional neural network in an image recognition field. The deep learning can be applied to a wide variety of devices from image recognition terminals for automatic driving to cloud computing for big data analysis...[In Kanno Para 2-4]. It is noted the KSR ruling recommends references directed to similar subject matter to be combined. Claim 8, Haruki and Kanno and Mahmoudi further teach: the graphics multiprocessor configured to perform operations associated with the first subnetwork and the second subnetwork, the operations cause the graphics multiprocessor to: (is/are fully incorporated similar subject of claim(s) 1-2 cited above) and further in view of the following: determine velocities of moving objects of the first subject and the second subject; and assign a hazard associated with the moving objects to the first set of stationary objects and the second set of stationary objects based on a velocity and trajectory of the moving objects, (In Mahmoudi Pages 131 -146 begin section 4. 
“Motion tracking algorithm”, discloses the GPUs-based motion tracking using the optical flow and matrix operation performed based on the optical flow data and the sequence of images/video...and dense optical flow which tracks all frame pixels without selecting any features.. wherein the velocities of moving objects of the subject(s) assign a hazard associated with the moving objects to the stationary objects (static objects like trees and building and abnormal event(s) (i.e. hazardous) ... based on a velocity and trajectory of the moving objects... which are identified...) Accordingly, it would have been obvious to one having ordinary skill in the art at the time before the effective filing date of the claimed invention was made to modify Haruki/Kanno’s parallel training of a neural network model on multiple graphics processing units, to include a means said ,... determine velocities of moving objects of the first subject and the second subject; and assign a hazard associated with the moving objects to the first set of stationary objects and the second set of stationary objects based on a velocity and trajectory of the moving objects ... AS TAUGHT BY Mahmoudi, provides an improvement in a recognition rate by a convolutional neural network in an image recognition field. The deep learning can be applied to a wide variety of devices from image recognition terminals for automatic driving to cloud computing for big data analysis...[In Kanno Para 2-4]. It is noted the KSR ruling recommends references directed to similar subject matter to be combined. Regarding Claim(s) 13-16 (respectively) is/are fully incorporated similar subject of claim(s) 5-8 (respectively) cited above. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
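The claim 7-8 analysis above turns on “dense” optical flow (a displacement estimated for every pixel, not just for selected features) and on using the resulting per-pixel velocities to separate moving objects from stationary ones. As a rough, self-contained illustration of that concept only, here is a toy block-matching flow estimator in pure Python; none of this code comes from Mahmoudi or any other cited reference, and all names and data are illustrative:

```python
# Toy dense optical flow via block matching: every pixel gets a
# displacement vector, unlike sparse methods that track chosen features.

def sad(f1, f2, y, x, dy, dx, r=1):
    """Sum of absolute differences between the patch of f1 centered at
    (y, x) and the patch of f2 displaced by (dy, dx)."""
    return sum(
        abs(f1[y + j][x + i] - f2[y + j + dy][x + i + dx])
        for j in range(-r, r + 1)
        for i in range(-r, r + 1)
    )

def dense_flow(f1, f2, max_d=1, r=1):
    """Per-pixel displacement minimizing patch SAD over a small search
    window. Ties are broken toward zero motion so that flat background
    regions report themselves as stationary."""
    h, w = len(f1), len(f1[0])
    m = r + max_d  # margin so every patch lookup stays in bounds
    flow = [[(0, 0)] * w for _ in range(h)]
    for y in range(m, h - m):
        for x in range(m, w - m):
            flow[y][x] = min(
                ((dy, dx)
                 for dy in range(-max_d, max_d + 1)
                 for dx in range(-max_d, max_d + 1)),
                key=lambda d: (sad(f1, f2, y, x, d[0], d[1], r),
                               abs(d[0]) + abs(d[1])),
            )
    return flow

# Frame 2 is frame 1 with a bright 3x3 blob shifted one pixel to the right.
f1 = [[0] * 10 for _ in range(10)]
f2 = [[0] * 10 for _ in range(10)]
for y in range(4, 7):
    for x in range(2, 5):
        f1[y][x] = 255
        f2[y][x + 1] = 255

flow = dense_flow(f1, f2)
```

In these toy frames, a pixel inside the blob reports the rightward displacement (0, 1), while a far-away background pixel (a “stationary object”) reports (0, 0); thresholding the displacement magnitude is the simplest way to split moving regions from stationary ones before reasoning about velocity and trajectory.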
Jin et al. (US 2016/0321777 A1, filed 07/14/2016) relates to a parallel data processing method based on multiple graphics processing units (GPUs), including: creating, in a central processing unit (CPU), a plurality of worker threads for controlling a plurality of worker groups, each worker group including one or more GPUs; binding each worker thread to a corresponding GPU; loading a plurality of batches of training data from a nonvolatile memory to GPU video memories in the plurality of worker groups; and controlling the plurality of GPUs to perform data processing in parallel through the worker threads. The method can enhance the efficiency of multi-GPU parallel data processing; a parallel data processing apparatus is also provided. [Abstract.]

Chapelle et al. (US 2013/0290223 A1) relates to distributed machine learning on a cluster including a plurality of nodes. A machine learning process is performed in each node based on a respective subset of the training data to calculate a local parameter, the training data being partitioned over the nodes. A plurality of operation nodes is determined from the nodes based on the status of the machine learning process in each node, and the operation nodes are connected to form a network topology. An aggregated parameter is generated by merging the local parameters calculated in each operation node in accordance with the network topology. [Abstract.]

Kaiser et al., NPL (“NEURAL GPUS LEARN ALGORITHMS,” published 2016, 9 pages), relates to learning algorithms with neural networks, in particular Neural Turing Machines (NTMs): fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. The authors present a neural network architecture to address this problem, the Neural GPU, based on a type of convolutional gated recurrent unit and, like the NTM, computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size; the authors show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances, verified on tasks including long addition and long multiplication of numbers represented in binary: trained on numbers of up to 20 bits, it makes no errors even on much longer numbers. To achieve these results, the authors introduce a technique for training deep recurrent networks, parameter sharing relaxation, and found that a small amount of dropout and gradient noise has a large positive effect on learning and generalization. [Abstract.]

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC A TRAN, whose telephone number is (571) 272-8664. The examiner can normally be reached Monday-Friday, 9am-5pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula, can be reached at 571-272-4128. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/QUOC A TRAN/
Primary Examiner, Art Unit 2145
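The Jin and Chapelle abstracts cited above describe complementary halves of one common data-parallel pattern: per-device worker threads each processing a partition of the training data (Jin), and local parameters merged into a single aggregated parameter (Chapelle). A minimal sketch of that pattern, with plain Python threads standing in for GPU-bound workers and a shard mean standing in for a real training step; all names here are illustrative and reproduce neither reference's actual API:

```python
import threading

def local_step(shard):
    """Stand-in for a per-device training step: returns a 'local
    parameter' (here, simply the mean of the shard's values)."""
    return sum(shard) / len(shard)

def parallel_train(data, num_workers):
    # Partition the training data across workers (Jin: batches of
    # training data assigned to worker groups).
    shards = [data[i::num_workers] for i in range(num_workers)]
    local_params = [None] * num_workers

    def worker(idx):
        # Each thread is "bound" to one device and handles only its shard.
        local_params[idx] = local_step(shards[idx])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Chapelle: merge the local parameters into one aggregated
    # parameter, weighting each worker by its shard size.
    total = sum(len(s) for s in shards)
    return sum(p * len(s) for p, s in zip(local_params, shards)) / total

params = parallel_train([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], num_workers=2)
```

The shard-size-weighted merge recovers the global mean of the data exactly; in real data-parallel training the "local parameter" would be a gradient or weight vector and the merge an all-reduce-style average across devices.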

Prosecution Timeline

May 09, 2023
Application Filed
Mar 20, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586003
Method and Apparatus for Generating Operator
2y 5m to grant Granted Mar 24, 2026
Patent 12585951
METHOD AND ELECTRONIC DEVICE FOR GENERATING OPTIMAL NEURAL NETWORK (NN) MODEL
2y 5m to grant Granted Mar 24, 2026
Patent 12572772
SCALABLE DIGITAL TWIN SERVICE SYSTEM AND METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12561617
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12561610
METHOD AND APPARATUS FOR PRESENTING CANDIDATE CHARACTER STRING, AND METHOD AND APPARATUS FOR TRAINING DISCRIMINATIVE MODEL
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+29.4%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 735 resolved cases by this examiner. Grant probability derived from career allow rate.
