Prosecution Insights
Last updated: April 19, 2026
Application No. 17/793,666

MULTIPLE-TASK NEURAL NETWORKS

Final Rejection: §101, §103
Filed: Jul 19, 2022
Examiner: KIM, HARRISON CHAN YOUNG
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Hewlett-Packard Development Company, L.P.
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 50% (3 granted / 6 resolved; -5.0% vs TC avg)
Interview Lift: +33.3% for resolved cases with an interview (strong)
Avg Prosecution: 3y 3m typical timeline (33 currently pending)
Total Applications: 39 across all art units (career history)

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§102: 4.9% (-35.1% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
TC averages are estimates • Based on career data from 6 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is made final. Claims 1-19 are pending. Claims 1, 11 and 15 are independent claims.

Response to Arguments

With respect to the 35 U.S.C. 101 rejections of the previous office action, the applicant argues that the amended claims are not directed to an abstract idea without significantly more. The examiner respectfully disagrees – see the updated 35 U.S.C. 101 section below.

With respect to the 35 U.S.C. 103 rejections of the previous office action, the applicant’s arguments are persuasive; however, the scope of the claims has changed and new grounds of rejection are applied – see the updated 35 U.S.C. 103 section below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. Claim 1 recites: A method… Claim 1 is directed to a process (Step 1: YES).

Step 2A prong 1: Does the claim recite a judicial exception? Claim 1 recites: determining… from the multiple tasks, a first task for the first feature based on a type of application running on an apparatus (determining a task based on an application that is running is a mental process, i.e., determining an image recognition task from a camera application running)… This step can be performed mentally (Step 2A prong 1: YES).

Step 2A prong 2: Does the claim recite additional elements?
Do those additional elements, considered individually and in combination, integrate the judicial exception into a practical application? Claim 1 recites: comprising: determining, with a processor, a feature vector using a first portion of a neural network, wherein the neural network is trained for multiple tasks… with the processor… and transmitting, with the processor, the feature vector to a remote device to perform the first task based on the determining of the first task for the first feature, wherein the remote device is to perform the first task using a second portion of the neural network. Determining a feature vector using a neural network and performing a task using the network are attempts to use the neural network by merely applying the abstract idea (i.e., perform the mental process) without placing any limits on how the neural network model operates. Further, the claim omits any details as to how the neural network model solves a technical problem and instead recites only the idea of a solution or outcome. See MPEP 2106.05(f). Thus, the limitation represents no more than mere instructions to implement the abstract idea, which is equivalent to adding the words “apply it” to the recited judicial exception. Determining with a processor, specifically, is mere instructions to implement the abstract idea on a generic computer. Transmitting a feature vector to a remote device is insignificant extra-solution activity of sending data over a network (MPEP 2106.05(d)(II)), recited at a high level of generality, that does not offer a meaningful limitation to the neural network process (Step 2A prong 2: NO).

Step 2B: These elements are recited at such a high level of generality that they fail to integrate the abstract idea into a practical application, since they provide nothing more than mere instructions to implement an abstract idea on a generic computer (MPEP 2106.05(f)) or are insignificant extra-solution activity.
These limitations, taken either alone or in combination, fail to provide an inventive concept (Step 2B: NO). Thus, the claim is not patent eligible.

Regarding claims 2-10 and 16-18, they recite limitations which further narrow the abstract idea by specifying more details of the mental and mathematical process that occurs:

Claim 2: specifying that there is overlap in the neural network is an additional element specifying a field of use without significantly more.
Claim 3: describing that portions of the neural network are distributed over remote devices is an additional element specifying a field of use without significantly more.
Claim 5: selecting a remote device from a set of remote devices is a mental process.
Claim 6: obscuring data is a mental process or a mathematical calculation (i.e., applying a mask to input data).
Claim 7: determining a second feature vector is repeating the mental process, and determining whether to transmit the vector is another mental process.
Claim 8: determining the distance between two vectors and comparing the distance to a threshold are mathematical calculations.
Claim 9: determining a change metric between features and determining if the change meets a criterion are mental processes or mathematical calculations.
Claim 10: specifying that the data is frames in a frame sequence is a field of use additional element that does not provide a meaningful limitation to the neural network process.
Claim 16: specifying that the feature vector indicates image, audio and text characteristics/attributes is an attempt to limit the field of use without significantly more.
Claim 17: specifying that the feature vector indicates audio characteristics is an attempt to limit the field of use without significantly more.
Claim 18: specifying that the feature vector indicates text attributes is an attempt to limit the field of use without significantly more.
Regarding claim 11:

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. Claim 11 recites: An apparatus comprising: a memory; and a processor coupled to the memory, wherein the processor is to… Claim 11 is directed to an apparatus (Step 1: YES).

Step 2A prong 1: Does the claim recite a judicial exception? Claim 11 recites: determine, from multiple tasks, a task for the first feature vector based on a type of application running on an apparatus; select a remote device corresponding to the determined task. These steps can be performed mentally (Step 2A prong 1: YES).

Step 2A prong 2: Does the claim recite additional elements? Do those additional elements, considered individually and in combination, integrate the judicial exception into a practical application? Claim 11 recites: determine a first feature vector using a first portion of a neural network… and send the first feature vector to the selected remote device, wherein the remote device is to perform the determined task using a second portion of the neural network. Determining a feature vector using a neural network and performing the determined task using the network are attempts to use the neural network by merely applying the abstract idea (i.e., perform the mental process) without placing any limits on how the neural network model operates. Further, the claim omits any details as to how the neural network model solves a technical problem and instead recites only the idea of a solution or outcome. See MPEP 2106.05(f). Thus, the limitation represents no more than mere instructions to implement the abstract idea, which is equivalent to adding the words “apply it” to the recited judicial exception.
Sending the first feature vector to a remote device is insignificant extra-solution activity of sending data over a network (MPEP 2106.05(d)(II)), recited at a high level of generality, that does not offer a meaningful limitation to the neural network process (Step 2A prong 2: NO).

Step 2B: These elements are recited at such a high level of generality that they fail to integrate the abstract idea into a practical application, since they provide nothing more than mere instructions to implement an abstract idea on a generic computer (MPEP 2106.05(f)) or are insignificant extra-solution activity (Step 2B: NO). Thus, the claim is not patent eligible.

Regarding claims 12, 13 and 19, they recite limitations which further narrow the abstract idea by specifying more details of the mental and mathematical process that occurs (Claim 12, describing tasks being distributed on mutually exclusive neural network portions on remote devices is an attempt to implement the abstract idea on a generic computer, equivalent to adding the words “apply it” as it only recites the idea of a solution/outcome without details on how to accomplish the solution; Claim 13, determining a second feature vector is again recited at a high level of generality and provides nothing more than mere instructions to implement an abstract idea on a generic computer (MPEP 2106.05(f)), determining a distinctiveness and comparing the distinctiveness to a distinctiveness criterion is a mental process or mathematical calculation; Claim 19, describing the application as a camera, autonomous driving, interior design or transcription application is limiting the field of use without significantly more).

Regarding claim 14, it is an apparatus that recites similar limitations to claim 11 and is rejected on the same grounds – see above.

Regarding claim 15, it recites limitations which further narrow the abstract idea by specifying more details of the mental and mathematical process that occurs.
Distributing the neural network so that there are exclusive portions on different devices and performing concurrent inferences are recited at a high level of generality and provide nothing more than mere instructions to implement the abstract idea on a generic computer (MPEP 2106.05(f)) as they only recite the idea of a solution/outcome without details on how to accomplish the solution.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-6, 11, 12, 14, 15 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tran et al. (US 20200286383 A1), herein Tran, in view of Nosko et al. (US 20190050714 A1), herein Nosko, and Balassanian (US 20140036099 A1).

Regarding claim 1, Tran teaches: A method, comprising: determining, with a processor, a feature vector using a first portion of a neural network, wherein the neural network is trained for multiple tasks… and transmitting, with the processor, the feature vector… to perform the first task using a second portion of the neural network (¶17, In particular, the input image is first passed through a feature extraction module which extracts features for sharing across different perception tasks. Those shared features are then fed to task-specific branches with each performing one or more perception tasks).
Tran fails to teach: transmitting the feature vector to a remote device to perform the first task… wherein the remote device is to perform one of the multiple tasks using a second portion of the neural network. However, in the same field of endeavor, Nosko teaches: transmitting the feature vector to a remote device to perform the first task… wherein the remote device is to perform one of the multiple tasks using a second portion of the neural network (¶48, some of the network modules may be executed in a distributed manner, i.e. on different devices and/or platforms). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to distribute the network over different devices as disclosed by Nosko in the multi-task network disclosed by Tran to save computational resources (¶48, by executing a modular neural network in a distributed manner, system 100 may save and/or manage computation resources, energy and/or network traffic). 
Tran in view of Nosko fails to teach: determining, with the processor, from the multiple tasks, a first task for the first feature based on a type of application running on an apparatus… based on the determining of the first task for the first feature… However, in the same field of endeavor, Balassanian teaches: determining, with the processor, from the multiple tasks, a first task for the first feature based on a type of application running on an apparatus… based on the determining of the first task for the first feature (Abstract, Techniques are disclosed relating to prediction of desired information types for image scanning. In some embodiments, a scanner is configured to predict a desired information type based on applications (e.g., running on a device, displayed on a device, or recently opened on a device) – and – ¶7, In some embodiments, information types may include: payment information, contact information, text/document information, bill information, receipt information, image information (e.g., a photograph), drawing information, barcode information – determining information types is analogous to determining tasks).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine a specific task based on a type of application running on an apparatus as disclosed by Balassanian in the multi-task neural network disclosed by Tran in view of Nosko to simplify processing (¶5, Further, more generalized systems may be complex and expensive because they often try to recognize objects among a large universe of objects without a sense of a smaller relevant set of objects for which to scan – and – ¶29, Searching for a relatively small set of known object types may greatly simplify complexity of image processing and may thus reduce processing time before automatically extracting information).
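The arrangement the examiner attributes to the three-reference combination for claim 1 can be sketched in code: a shared first portion of a multi-task network runs locally (Tran), the task is chosen from the type of application running on the apparatus (Balassanian), and the feature vector is sent to the remote device that holds the task-specific second portion (Nosko). This is a minimal illustration only; every name, mapping, and computation below is hypothetical and not taken from any of the cited references.

```python
# Hypothetical mapping from application type to task (Balassanian-style
# prediction of the desired task from the running application).
APP_TO_TASK = {
    "camera": "image_recognition",
    "transcription": "speech_to_text",
}

# Hypothetical remote devices, each holding one task-specific head
# (Nosko-style distribution of network modules across devices).
TASK_TO_DEVICE = {
    "image_recognition": "device_A",
    "speech_to_text": "device_B",
}

def shared_feature_extractor(data):
    """Stand-in for the shared first portion of the network (Tran's trunk)."""
    return [x * 0.5 for x in data]  # placeholder computation, not a real model

def dispatch(data, running_app):
    """Compute shared features, pick the task from the running app,
    and select the remote device that holds the matching head."""
    features = shared_feature_extractor(data)
    task = APP_TO_TASK[running_app]
    device = TASK_TO_DEVICE[task]
    # In a real system the features would be transmitted to `device` here.
    return {"device": device, "task": task, "features": features}

result = dispatch([1.0, 2.0], "camera")
```

With a camera application running, the sketch routes the feature vector to the device holding the image-recognition head.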
Regarding claim 2, Tran further teaches: The method of claim 1, wherein the first portion of the neural network overlaps for each of the multiple tasks (¶17, In particular, the input image is first passed through a feature extraction module which extracts features for sharing across different perception tasks).

Regarding claim 3, Tran teaches: The method of claim 1, wherein the first portion of the neural network is stored in an apparatus (¶17, In particular, the input image is first passed through a feature extraction module which extracts features for sharing across different perception tasks). Tran fails to teach: wherein other portions of the neural network, respectively corresponding to each of the multiple tasks, are distributed over a set of remote devices. However, in the same field of endeavor, Nosko teaches: wherein other portions of the neural network, respectively corresponding to each of the multiple tasks, are distributed over a set of remote devices (¶48, some of the network modules may be executed in a distributed manner, i.e. on different devices and/or platforms). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to distribute the network over different devices as disclosed by Nosko in the multi-task network disclosed by Tran in view of Balassanian to save computational resources (¶48, by executing a modular neural network in a distributed manner, system 100 may save and/or manage computation resources, energy and/or network traffic).

Regarding claim 4, Tran further teaches: The method of claim 1, wherein each of the multiple tasks corresponds to a mutually exclusive portion of the neural network relative to each of the other multiple tasks (¶56, processing corresponding ones of the shared features by respective different branches of the multi-task CNN to provide a plurality of different perception task outputs.
Each of the respective different branches corresponds to a respective one of the different perception tasks).

Regarding claim 5, Tran fails to teach: The method of claim 1, further comprising selecting the remote device from a set of remote devices. However, in the same field of endeavor, Nosko teaches selecting the remote device from a set of remote devices (¶49, controller 10 may adapt distributed modular neural network 300 and change dynamically which of the network modules are executed on which device). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select the remote device from a set of remote devices as disclosed by Nosko in the multi-task network disclosed by Tran in view of Balassanian to balance computational resource use (¶49, based on the available computational and/or communication resources on each device and/or platform).

Regarding claim 6, Tran further teaches: The method of claim 1, wherein determining the feature vector comprises obscuring data input to the first portion of the neural network (¶44, Regarding the shared feature extraction component 410, the same is represented as a convolutional neural network (CNN) – performing convolution operations on an input image or video frame will result in the input data being obscured).

Regarding claim 11, Tran teaches: An apparatus, comprising: a memory; and a processor coupled to the memory, wherein the processor is to: determine a first feature vector using a first portion of a neural network… perform the determined task using a second portion of the neural network (¶17, In particular, the input image is first passed through a feature extraction module which extracts features for sharing across different perception tasks.
Those shared features are then fed to task-specific branches with each performing one or more perception tasks) Tran fails to teach: select a remote device corresponding to the determined task; and send the first feature vector to the selected remote device, wherein the remote device is to perform the determined task (Tran does teach selecting branches for tasks – ¶17, shared features are then fed to task-specific branches with each performing one or more perception tasks – but not selecting remote devices). However, in the same field of endeavor, Nosko teaches: select a remote device corresponding to the determined task; and send the first feature vector to the selected remote device, wherein the remote device is to perform the determined task (¶48, some of the network modules may be executed in a distributed manner, i.e. on different devices and/or platforms). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to distribute the network over different devices as disclosed by Nosko in the multi-task network disclosed by Tran to save computational resources (¶48, by executing a modular neural network in a distributed manner, system 100 may save and/or manage computation resources, energy and/or network traffic). Tran in view of Nosko fails to teach: determine, from multiple tasks, a task for the first feature vector based on a type of application running on an apparatus. 
However, in the same field of endeavor, Balassanian teaches: determine, from multiple tasks, a task for the first feature vector based on a type of application running on an apparatus (Abstract, Techniques are disclosed relating to prediction of desired information types for image scanning. In some embodiments, a scanner is configured to predict a desired information type based on applications (e.g., running on a device, displayed on a device, or recently opened on a device) – and – ¶7, In some embodiments, information types may include: payment information, contact information, text/document information, bill information, receipt information, image information (e.g., a photograph), drawing information, barcode information – determining information types is analogous to determining tasks). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine a specific task based on a type of application running on an apparatus as disclosed by Balassanian in the multi-task neural network apparatus disclosed by Tran in view of Nosko to simplify processing (¶5, Further, more generalized systems may be complex and expensive because they often try to recognize objects among a large universe of objects without a sense of a smaller relevant set of objects for which to scan – and – ¶29, Searching for a relatively small set of known object types may greatly simplify complexity of image processing and may thus reduce processing time before automatically extracting information).

Regarding claim 12, Tran further teaches: The apparatus of claim 11, wherein the multiple tasks respectively correspond to… portions of the neural network that are mutually exclusive from each other (¶56, processing corresponding ones of the shared features by respective different branches of the multi-task CNN to provide a plurality of different perception task outputs.
Each of the respective different branches corresponds to a respective one of the different perception tasks). Tran in view of Balassanian fails to teach: wherein the portions are remote…, and wherein the multiple tasks are distributed over multiple remote devices. However, in the same field of endeavor, Nosko teaches: wherein the portions are remote…, and wherein the multiple tasks are distributed over multiple remote devices (¶48, some of the network modules may be executed in a distributed manner, i.e. on different devices and/or platforms). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to distribute the processing of the multi-task neural network disclosed by Tran in view of Balassanian on different devices as disclosed by Nosko to save computational resources (¶48, by executing a modular neural network in a distributed manner, system 100 may save and/or manage computation resources, energy and/or network traffic).

Regarding claim 14, Tran teaches: A non-transitory tangible computer-readable medium storing executable code, comprising… code to cause the processor to determine the inference using an exclusive portion of a neural network based on a feature vector determined by a… apparatus using a shared portion of the neural network; and code to cause the processor to transmit the inference to the… apparatus. (¶17, In particular, the input image is first passed through a feature extraction module which extracts features for sharing across different perception tasks. Those shared features are then fed to task-specific branches with each performing one or more perception tasks). Tran fails to teach a feature vector determined: by a remote apparatus… the remote apparatus. However, in the same field of endeavor, Nosko teaches determining: by a remote apparatus… the remote apparatus (¶48, some of the network modules may be executed in a distributed manner, i.e.
on different devices and/or platforms). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to distribute the network over different devices as disclosed by Nosko in the multi-task network disclosed by Tran to save computational resources (¶48, by executing a modular neural network in a distributed manner, system 100 may save and/or manage computation resources, energy and/or network traffic).

Tran in view of Nosko fails to teach: code to cause a processor to determine a task indicating an inference based on a type of application running on an apparatus. However, in the same field of endeavor, Balassanian teaches: code to cause a processor to determine a task indicating an inference based on a type of application running on an apparatus (Abstract, Techniques are disclosed relating to prediction of desired information types for image scanning. In some embodiments, a scanner is configured to predict a desired information type based on applications (e.g., running on a device, displayed on a device, or recently opened on a device) – and – ¶7, In some embodiments, information types may include: payment information, contact information, text/document information, bill information, receipt information, image information (e.g., a photograph), drawing information, barcode information – determining information types is analogous to determining tasks).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine a specific task based on a type of application running on an apparatus as disclosed by Balassanian in the multi-task neural network apparatus disclosed by Tran in view of Nosko to simplify processing (¶5, Further, more generalized systems may be complex and expensive because they often try to recognize objects among a large universe of objects without a sense of a smaller relevant set of objects for which to scan – and – ¶29, Searching for a relatively small set of known object types may greatly simplify complexity of image processing and may thus reduce processing time before automatically extracting information).

Regarding claim 15, Tran further teaches: The computer-readable medium of claim 14, wherein the inference is determined concurrently with a second inference determined by a remote device using a second exclusive portion of the neural network (Abstract, The method concurrently solves, using the multi-task CNN, the different perception tasks in a single pass by concurrently processing corresponding ones of the shared features).

Regarding claim 19, Tran further teaches: The apparatus of claim 11, wherein the type of application comprises at least one of: a camera application, an autonomous driving application, an interior design application, or a transcription application (¶6, The hardware processor also runs the program code to control an operation of the vehicle for collision avoidance responsive to the at least one top-view map indicating an impending collision – specifically, Tran runs code that can control a vehicle to avoid collisions, i.e., an autonomous driving application).

Claim(s) 7, 8, 9, 10 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tran in view of Nosko and Balassanian as applied to claims 1 and 11 above, and further in view of Aliamiri et al.
(US 20210142068 A1), herein Aliamiri.

Regarding claim 7, Tran in view of Nosko and Balassanian fails to teach: The method of claim 1, wherein the feature vector corresponds to first data, and wherein the method further comprises: determining a second feature vector corresponding to second data using the first portion of the neural network; and determining whether to transmit the second feature vector. However, in the same field of endeavor, Aliamiri teaches: wherein the feature vector corresponds to first data, and wherein the method further comprises: determining a second feature vector corresponding to second data using the first portion of the neural network; and determining whether to transmit the second feature vector (¶62, The feature vector of that frame is compared against the feature vectors of the other reference frames… If the similarity score of the feature vector of the current frame fails to satisfy a dissimilarity threshold… then it is discarded. Otherwise, the current frame is added to the collection of selected reference frames). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to transmit feature vectors based on a determination process as disclosed by Aliamiri in the multi-task network disclosed by Tran in view of Nosko and Balassanian to discard redundant data (¶41, Reducing or decimating the data through the selection of portions of a data stream reduces storage costs and also reduces human annotation costs, while retaining the benefit of obtaining additional data for improving the performance of trained algorithms).

Regarding claim 8, Tran in view of Nosko and Balassanian fails to teach: The method of claim 7, wherein determining whether to transmit the second feature vector comprises: determining a distance between the feature vector and the second feature vector; and comparing the distance to a distance threshold.
However, in the same field of endeavor, Aliamiri teaches: wherein determining whether to transmit the second feature vector comprises: determining a distance between the feature vector and the second feature vector; and comparing the distance to a distance threshold (¶62, the similarity score from the comparison of two feature vectors is computed using, for example, the L1 norm (or Manhattan distance) or the L2 norm (or Euclidian distance) between those two feature vectors. If the similarity score of the feature vector of the current frame fails to satisfy a dissimilarity threshold… then it is discarded). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine if a feature vector should be transmitted using a distance and threshold as disclosed by Aliamiri in the multi-task network disclosed by Tran in view of Nosko and Balassanian to discard redundant data (¶41, Reducing or decimating the data through the selection of portions of a data stream reduces storage costs and also reduces human annotation costs, while retaining the benefit of obtaining additional data for improving the performance of trained algorithms).

Regarding claim 9, Tran in view of Nosko and Balassanian fails to teach: The method of claim 7, wherein determining whether to transmit the second feature vector comprises: determining a change metric between each feature of the feature vector and a corresponding feature of the second feature vector; and determining whether the change metric meets a change criterion.
However, in the same field of endeavor, Aliamiri teaches: wherein determining whether to transmit the second feature vector comprises: determining a change metric between each feature of the feature vector and a corresponding feature of the second feature vector; and determining whether the change metric meets a change criterion (¶62, the similarity score from the comparison of two feature vectors is computed using, for example, the L1 norm (or Manhattan distance) or the L2 norm (or Euclidian distance) between those two feature vectors. If the similarity score of the feature vector of the current frame fails to satisfy a dissimilarity threshold… then it is discarded). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine if a feature vector should be transmitted using a change metric and change criterion as disclosed by Aliamiri in the multi-task network disclosed by Tran in view of Nosko and Balassanian to discard redundant data (¶41, Reducing or decimating the data through the selection of portions of a data stream reduces storage costs and also reduces human annotation costs, while retaining the benefit of obtaining additional data for improving the performance of trained algorithms).

Regarding claim 10, Tran further teaches: The method of claim 7, wherein the first data is a first frame and the second data is a second frame in a frame sequence (¶28, given an input video 210, the perception network 220 processes each frame separately in a single forward pass and produces rich per-frame outputs).
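The distance test Aliamiri is cited for in the claim 8 and 9 rejections (¶62: compare two feature vectors with the L1 or L2 norm and keep only vectors that satisfy a dissimilarity threshold) can be sketched as follows. Function names and threshold values are illustrative, not drawn from the reference.

```python
import math

def l1_distance(a, b):
    """L1 (Manhattan) distance between two equal-length feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def l2_distance(a, b):
    """L2 (Euclidean) distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def should_transmit(prev_vec, new_vec, threshold, norm=l2_distance):
    """Transmit the new feature vector only if it is dissimilar enough
    from the previously transmitted one (a change-criterion test)."""
    return norm(prev_vec, new_vec) >= threshold

# A near-duplicate frame fails the dissimilarity threshold and is discarded;
# a sufficiently changed frame is transmitted.
print(should_transmit([1.0, 1.0], [1.0, 1.1], threshold=0.5))  # False
print(should_transmit([1.0, 1.0], [2.0, 2.0], threshold=0.5))  # True
```

Passing `norm=l1_distance` instead swaps in the Manhattan-distance variant; either norm fits the "distance compared to a threshold" pattern recited in claims 8 and 9.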
Regarding claim 13, Tran further teaches: The apparatus of claim 11, wherein the first feature vector corresponds to a first frame, and wherein the processor is to: determine a second feature vector corresponding to a second frame using the first portion of the neural network (¶28, given an input video 210, the perception network 220 processes each frame separately in a single forward pass and produces rich per-frame outputs).

Tran in view of Nosko and Balassanian fails to teach a processor to further: determine a distinctiveness of the second feature vector based on the first feature vector; and send the second feature vector to the selected remote device in response to determining that the second feature vector satisfies a distinctiveness criterion.

However, in the same field of endeavor, Aliamiri teaches: a processor to determine a distinctiveness of the second feature vector based on the first feature vector; and send the second feature vector to the selected remote device in response to determining that the second feature vector satisfies a distinctiveness criterion (¶62, the similarity score from the comparison of two feature vectors is computed using, for example, the L1 norm (or Manhattan distance) or the L2 norm (or Euclidean distance) between those two feature vectors. If the similarity score of the feature vector of the current frame fails to satisfy a dissimilarity threshold… then it is discarded).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine whether a feature vector should be transmitted based on its distinctiveness as disclosed by Aliamiri in the multi-task network disclosed by Tran in view of Nosko and Balassanian to discard redundant data (¶41, Reducing or decimating the data through the selection of portions of a data stream reduces storage costs and also reduces human annotation costs, while retaining the benefit of obtaining additional data for improving the performance of trained algorithms).

Claims 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Tran in view of Nosko and Balassanian as applied to claim 1 above, and further in view of Madhavan et al. (US 20210216916 A1), herein Madhavan.

Regarding claim 16, Tran in view of Nosko and Balassanian fails to teach: The method of claim 1, wherein the feature vector indicates a plurality of image characteristics, a plurality of audio characteristics, and a plurality of text attributes.

However, in the same field of endeavor, Madhavan teaches: wherein the feature vector indicates a plurality of image characteristics, a plurality of audio characteristics, and a plurality of text attributes (¶20, In particular, given a test data point (e.g., feature vector) for a content item (e.g., text, image, photo, audio, video, or a combination thereof) to be classified, the region of the region set to which the test data point belongs is determined).
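The multimodal feature vector at issue in claims 16-18 (one vector carrying image characteristics, audio characteristics, and text attributes) can be sketched as a simple concatenation of per-modality features. This is a hypothetical illustration only; the toy "extractors" below stand in for the trained encoders a real system would use, and none of the names come from the cited references.

```python
def image_features(pixels):
    # e.g., mean intensity and contrast as two image characteristics
    mean = sum(pixels) / len(pixels)
    contrast = max(pixels) - min(pixels)
    return [mean, contrast]

def audio_features(samples):
    # e.g., peak amplitude and zero-crossing count as audio characteristics
    peak = max(abs(s) for s in samples)
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return [peak, float(crossings)]

def text_features(text):
    # e.g., token count and average token length as text attributes
    tokens = text.split()
    avg_len = sum(len(t) for t in tokens) / len(tokens) if tokens else 0.0
    return [float(len(tokens)), avg_len]

def multimodal_vector(pixels, samples, text):
    # Concatenate per-modality features into one flat feature vector.
    return image_features(pixels) + audio_features(samples) + text_features(text)

vec = multimodal_vector([0.2, 0.8, 0.5], [0.1, -0.3, 0.2], "hello world")
print(len(vec))  # 6: two features per modality
```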
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include image, audio, and text information in the feature vector as disclosed by Madhavan in the multi-task neural network as disclosed by Tran in view of Nosko and Balassanian to handle a wider variety of data types (¶28, Such content items may include news items, job postings, advertisements, comments, posts, or other discrete text or media items (e.g., text, audio, images, photos, video, or a combination thereof) that appear in the end-users' online social or professional network content feeds).

Regarding claim 17, Tran in view of Nosko and Balassanian fails to teach: The method of claim 1, wherein the feature vector indicates a plurality of audio characteristics.

However, in the same field of endeavor, Madhavan teaches: wherein the feature vector indicates a plurality of audio characteristics (¶20, In particular, given a test data point (e.g., feature vector) for a content item (e.g., text, image, photo, audio, video, or a combination thereof) to be classified, the region of the region set to which the test data point belongs is determined).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include audio characteristics in the feature vector as disclosed by Madhavan in the multi-task neural network as disclosed by Tran in view of Nosko and Balassanian to handle a wider variety of data types (¶28, Such content items may include news items, job postings, advertisements, comments, posts, or other discrete text or media items (e.g., text, audio, images, photos, video, or a combination thereof) that appear in the end-users' online social or professional network content feeds).

Regarding claim 18, Tran in view of Nosko and Balassanian fails to teach: The method of claim 1, wherein the feature vector indicates a plurality of text attributes.
However, in the same field of endeavor, Madhavan teaches: wherein the feature vector indicates a plurality of text attributes (¶20, In particular, given a test data point (e.g., feature vector) for a content item (e.g., text, image, photo, audio, video, or a combination thereof) to be classified, the region of the region set to which the test data point belongs is determined).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include text attributes in the feature vector as disclosed by Madhavan in the multi-task neural network as disclosed by Tran in view of Nosko and Balassanian to handle a wider variety of data types (¶28, Such content items may include news items, job postings, advertisements, comments, posts, or other discrete text or media items (e.g., text, audio, images, photos, video, or a combination thereof) that appear in the end-users' online social or professional network content feeds).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRISON CHAN YOUNG KIM, whose telephone number is (571) 272-0713. The examiner can normally be reached Monday - Thursday, 10:00 am - 7:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HARRISON C KIM/
Examiner, Art Unit 2145

/CHAU T NGUYEN/
Primary Examiner, Art Unit 2145

Prosecution Timeline

Jul 19, 2022
Application Filed
Jun 23, 2025
Non-Final Rejection — §101, §103
Sep 29, 2025
Response Filed
Dec 23, 2025
Final Rejection — §101, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
83%
With Interview (+33.3%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
