Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to the filing of 9/1/2025. Claims 1-2 and 4-20 are pending and have been considered below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2 and 4-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-2 and 4-20 are system-type claims. Therefore, claims 1-2 and 4-20 are directed to a process, machine, manufacture, or composition of matter.
Regarding claims 1 and 14:
2A Prong 1:
finding that the stop condition is not reached, and changing multiple neural network weights; wherein the stop condition is related to similarities between the media unit signatures, wherein the similarities are based on shared features extracted by the neural network.
As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion - a user can evaluate whether the stop condition is reached and adjust settings).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
Non-transitory computer readable medium that stores instructions
(mere instructions to apply the exception using a generic computer component)
initializing a neural network comprising max pooling layers and convolutional layers that provide at least one invariance through spatial dimension reduction;
performing multiple training iterations until reaching a last training iteration in which a stop condition is fulfilled;
wherein each training iteration except the last training iteration comprises: processing more than 100,000 media units by the neural network to provide media unit signatures by mapping features from input dimensions to reduced output dimensions while maintaining the at least one invariance;
(Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
Non-transitory computer readable medium that stores instructions
(mere instructions to apply the exception using a generic computer component)
initializing a neural network comprising max pooling layers and convolutional layers that provide at least one invariance through spatial dimension reduction;
performing multiple training iterations until reaching a last training iteration in which a stop condition is fulfilled;
wherein each training iteration except the last training iteration comprises: processing more than 100,000 media units by the neural network to provide media unit signatures by mapping features from input dimensions to reduced output dimensions while maintaining the at least one invariance;
(Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
Regarding claims 2 and 15:
2A Prong 1:
finding that the stop condition is reached.
As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion - a user can evaluate whether the stop condition is reached and adjust settings).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
processing the vast number of media units by the neural network to provide media unit signatures (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
processing the vast number of media units by the neural network to provide media unit signatures (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
Regarding claims 4 and 16:
2A Prong 1:
and finding that the stop condition is not reached, and changing multiple neural network weights; wherein the signatures similarities are related to one or more similarities between the cluster signatures.
As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion - a user can evaluate whether the stop condition is reached and adjust settings).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
processing the vast number of media units by the neural network to provide the media unit signatures; clustering the media unit signatures to provide clusters of media unit signatures; generating cluster signatures, wherein a cluster signature is indicative of similarities between media unit signatures of the cluster; (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
processing the vast number of media units by the neural network to provide the media unit signatures; clustering the media unit signatures to provide clusters of media unit signatures; generating cluster signatures, wherein a cluster signature is indicative of similarities between media unit signatures of the cluster; (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
Regarding claims 5 and 17:
2A Prong 1:
wherein the stop condition is a maximal distance between cluster signatures. As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion - a user can evaluate distance).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
Regarding claims 6 and 18:
2A Prong 1:
wherein the stop condition is a maximal average distance between cluster signatures.
As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion - a user can evaluate distance).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
Regarding claims 7 and 19:
2A Prong 1:
wherein the stop condition is an average distance between cluster signatures that exceeds a predefined threshold.
As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion - a user can evaluate distance).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
Regarding claims 8 and 20:
2A Prong 1:
No additional abstract ideas
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
wherein the at least one invariance comprises at least one of scale invariance and translation invariance, the translation being movements inside a media unit inputted to the neural network.
(Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
Non-transitory computer readable medium
(mere instructions to apply the exception using a generic computer component)
wherein the at least one invariance comprises at least one of scale invariance and translation invariance.
(Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
Regarding claim 9:
2A Prong 1:
finding that the stop condition is not reached, and changing multiple neural network weights; wherein the stop condition is related to a relationship between first media unit signatures of one or more sets of the first media units.
As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion - a user can evaluate whether the stop condition is reached and adjust settings).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
initializing a neural network;
performing multiple training iterations until reaching a last training iteration in which a stop condition is fulfilled;
wherein each training iteration except the last training iteration comprises: processing a first group of media units and a second group of media units by the neural network to provide first media unit signatures and second media unit signatures; wherein the second group of the media units comprises more than 100,000 media units;
wherein the first group of media units comprises sets of media units that capture an object at different illumination and angle of view conditions; (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
initializing a neural network;
performing multiple training iterations until reaching a last training iteration in which a stop condition is fulfilled;
wherein each training iteration except the last training iteration comprises: processing a first group of media units and a second group of media units by the neural network to provide first media unit signatures and second media unit signatures; wherein the second group of the media units comprises more than 100,000 media units;
wherein the first group of media units comprises sets of media units that capture an object at different illumination and angle of view conditions; (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
Regarding claim 10:
2A Prong 1:
wherein the stop condition is that all first media units signatures are equal to each other.
As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion - a user can evaluate difference).
2A Prong 2: This judicial exception is not integrated into a practical application.
No additional elements.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
No additional elements.
Regarding claim 11:
2A Prong 1:
wherein the stop condition is that all first media units signatures are similar to each other.
As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion - a user can evaluate similarity).
2A Prong 2: This judicial exception is not integrated into a practical application.
No additional elements.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
No additional elements.
Regarding claim 12:
2A Prong 1:
wherein the stop condition is indifferent to a relationship between the first media unit signatures.
As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion - a user can evaluate similarity).
2A Prong 2: This judicial exception is not integrated into a practical application.
No additional elements.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
No additional elements.
Regarding claim 13:
2A Prong 1:
No additional abstract ideas
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
wherein the neural network provides at least one invariance that comprises at least one of scale invariance and translation invariance, the translation being movements inside a media unit inputted to the neural network.
(Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
wherein the at least one invariance comprises at least one of scale invariance and translation invariance.
(Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high-level recitation of training a machine learning model with previously determined data).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 8, 14-15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Beaufays et al. (“Beaufays” 20220270590 A1) in view of Podder et al. (“Podder” 20210166080 A1, provisional 62/942538), Yan et al. (“Yan” 20200401716 A1) and Selinger et al. (“Selinger” 20180307912 A1).
Claim 1: Beaufays discloses a method for an unsupervised training of a neural network, the method comprises: performing multiple training iterations until reaching a last training iteration in which a stop condition is fulfilled (Paragraph 68; check stop condition); wherein each training iteration except the last training iteration comprises: processing more than 100,000 media units by the neural network (Paragraphs 41 and 68; instance database could reasonably have 100,000 units in order to meet a set threshold) to provide media unit signatures; finding that the stop condition is not reached, and changing multiple neural network weights (Paragraph 34; change weights during training, and Paragraph 68; determine if the condition is met and stop);
Beaufays does not explicitly disclose initializing a neural network … at least one invariance; Podder is provided because it discloses a machine learning model that determines invariance (Paragraphs 23-27 and 70; model run and determination of invariance based on a test) and further retrains the model based on testing (Paragraphs 36 and 49). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique for a known device and provide training capability based on the determination of invariance found in Beaufays. One would have been motivated to provide the functionality because it expands the training evaluation, ensuring an optimized model.
Beaufays also does not explicitly disclose wherein the stop condition is related to signatures similarities.
wherein the stop condition is related to similarities between the media unit signatures, wherein the similarities are based on shared features extracted by the neural network.
Yan is provided because it discloses an unsupervised training of an autoencoder (spatial reduction functionality) that utilizes a stop condition based on difference values, which reflect the similarity of items (Paragraph 27); features are encoded by the encoder and similarity is determined (Paragraphs 28-29). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique for a known device and provide a stop condition based on an output difference in value (similarity) from shared features in Beaufays. One would have been motivated to provide the functionality because it expands the training evaluation, ensuring an optimized model.
Beaufays also may not explicitly disclose comprising max pooling layers and convolutional layers that provide at least one invariance through spatial dimension reduction;
and by mapping features from input dimensions to reduced output dimensions while maintaining the at least one invariance;
Selinger is provided because it discloses a model which utilizes max pooling and convolutional layers while maintaining invariance (Paragraphs 92-97). Further, the model provides mapping of features through feature pooling for reduced output, while maintaining invariance (Paragraph 86; feature pooling, and Paragraphs 97 and 99; cresceptron model (feature mapping model) used as an embodiment, while extending invariance). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique for a known device and provide the different models utilizing invariance in the models of Beaufays. One would have been motivated to provide the functionality because it expands the recognition capability through enhanced methods.
Claim 2: Beaufays, Podder, Yan and Selinger disclose a method according to claim 1 wherein the last training iteration comprises processing the vast number of media units by the neural network to provide media unit signatures and finding that the stop condition is reached (Yan: Paragraphs 27-29 and 33; units used to train and compared to a label (signatures) to determine difference and whether the condition is met).
Claim 8: Beaufays, Podder, Yan and Selinger disclose a method according to claim 1 wherein the at least one invariance comprises at least one of scale invariance and translation invariance, the translation being movements inside a media unit inputted to the neural network (Podder: Paragraphs 22-27; translation invariance including contrast (movements) of image (media) provided to neural network and Selinger: Paragraph 100; cluttered scenes).
Claim 14 is similar in scope to claim 1 and therefore rejected under the same rationale.
Claim 15 is similar in scope to claim 2 and therefore rejected under the same rationale.
Claim 20 is similar in scope to claim 8 and therefore rejected under the same rationale.
Claims 4-7 and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Beaufays et al. (“Beaufays” 20220270590 A1), Podder et al. (“Podder” 20210166080 A1, provisional 62/942538), Yan et al. (“Yan” 20200401716 A1) and Selinger et al. (“Selinger” 20180307912 A1), and further in view of Liu et al. (“Liu” 20200143248 A1) and Singh et al. (“Singh” 5983224).
Claim 4: Beaufays, Podder, Yan and Selinger disclose a method according to claim 1, but may not explicitly disclose wherein each training iteration except the last training iteration comprises: processing the vast number of media units by the neural network to provide the media unit signatures; clustering the media unit signatures to provide clusters of media unit signatures; generating cluster signatures, wherein a cluster signature is indicative of similarities between media unit signatures of the cluster;
wherein the signatures similarities are related to one or more similarities between the cluster signatures.
and changing multiple neural network weights (Beaufays: Paragraph 34 and Yan: Paragraph 60);
Liu is provided because it discloses a training method that utilizes clustering of data to determine a subset of features for a classification label (signatures) (Paragraphs 91-103; retrain through clustering in order to determine labels). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique for a known device and provide clustering with the labeling in Beaufays. One would have been motivated to provide the functionality because it expands the training evaluation, ensuring an optimized model.
Additionally, Singh is provided for finding that the stop condition is not reached;
Singh provides a clustering functionality and determines if a stop/termination condition is met based on a distance error calculation (Column 9, Line 50 - Column 10, Line 26; min/max distance used to determine the termination condition). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique for a known device and provide clustering stop conditions with the labeling in Beaufays. One would have been motivated to provide the functionality because it expands the training evaluation, ensuring an optimized model.
Claim 5: Beaufays, Podder, Yan, Selinger, Liu and Singh disclose a method according to claim 4 wherein the stop condition is a maximal distance between cluster signatures (Yan: Paragraphs 27-29; label differences can be determined, and Singh: Column 9, Line 50 - Column 10, Line 26; min/max distance used to determine the termination condition).
Claim 6: Beaufays, Podder, Yan, Selinger, Liu and Singh disclose a method according to claim 4 wherein the stop condition is a maximal average distance between cluster signatures (Singh: Column 9, Line 50 - Column 10, Line 26; min/max distance used to determine the termination condition).
Claim 7: Beaufays, Podder, Yan, Selinger, Liu and Singh disclose a method according to claim 4 wherein the stop condition is an average distance between cluster signatures that exceeds a predefined threshold (Yan: Paragraphs 27-29; label difference level “threshold”, and Singh: Column 9, Line 50 - Column 10, Line 26; min/max distance used to determine the termination condition).
Claim 16 is similar in scope to claim 4 and therefore rejected under the same rationale.
Claim 17 is similar in scope to claim 5 and therefore rejected under the same rationale.
Claim 18 is similar in scope to claim 6 and therefore rejected under the same rationale.
Claim 19 is similar in scope to claim 7 and therefore rejected under the same rationale.
Claims 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Beaufays et al. (“Beaufays” 20220270590 A1) in view of Yan et al. (“Yan” 20200401716 A1) and Selinger et al. (“Selinger” 20180307912 A1).
Claim 9: Beaufays discloses a method for a semi-supervised training of a neural network, the method comprises: initializing a neural network; performing multiple training iterations until reaching a last training iteration in which a stop condition is fulfilled; wherein each training iteration except the last training iteration comprises: processing a first group of media units and a second group of media units by the neural network to provide first media unit signatures and second media unit signatures (Paragraphs 30-34; first media and second media at different conditions or sections); wherein the second group of the media units comprises more than 100,000 media units (Paragraphs 41 and 68; instance database could reasonably have 100,000 units in order to meet a set threshold);
finding that the stop condition is not reached, and changing multiple neural network weights (Paragraphs 29-34 and 68-70; weighting adjusted until condition satisfied);
Beaufays may not explicitly disclose wherein the stop condition is related to a relationship between first media unit signatures of one or more sets of the first media units.
Yan is provided because it discloses an unsupervised training that utilizes a stop condition based on difference values of items (i.e., similarity) (Paragraphs 24-29 and 60; adjust parameters). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique for a known device and provide a stop condition based on a difference in value (similarity) in Beaufays. One would have been motivated to provide the functionality to expand the training evaluation, ensuring an optimized model.
Beaufays also may not explicitly disclose wherein the first group of media units comprises sets of media units that capture an object at different illumination and angle of view conditions;
Selinger is provided because it discloses a model with convolutional layers (Paragraphs 92-97). Further, the model performs image processing utilizing image sources (Paragraph 90). These image sources include frames with different conditions such as movement and lighting (similar to illumination and angles) (Paragraph 49). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique for a known device and provide the different imaging for the models of Beaufays. One would have been motivated to provide the functionality because it expands the recognition capability with robust datasets.
Claim 10: Beaufays, Yan and Selinger disclose a method according to claim 9 wherein the stop condition is that all first media units signatures are equal to each other (Yan: Paragraphs 27-29 and 33; units used to train and compared to a label (signatures) to determine difference and whether the condition is met (equal can be the determined condition)).
Claim 11: Beaufays, Yan and Selinger disclose a method according to claim 9 wherein the stop condition is that all first media units signatures are similar to each other (Yan: Paragraphs 27-29 and 33; units used to train and compared to a label (signatures) to determine difference and whether the condition is met (a smaller difference can signal similarity)).
Claim 12: Beaufays, Yan and Selinger disclose a method according to claim 9 wherein the stop condition is indifferent to a relationship between the first media unit signatures (Beaufays: Paragraphs 29-34 and 68-70; weighting adjusted until a condition is satisfied that makes no reference to such a relationship).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Beaufays et al. (“Beaufays” 20220270590 A1), Yan et al. (“Yan” 20200401716 A1) and Selinger et al. (“Selinger” 20180307912 A1), and further in view of Podder et al. (“Podder” 20210166080 A1, provisional 62/942538).
Claim 13: Beaufays, Yan and Selinger disclose a method according to claim 9 wherein the neural network provides at least one invariance that comprises at least one of scale invariance and translation invariance, the translation being movements inside a media unit inputted to the neural network (Selinger: Paragraphs 49 and 100; movement within frames, and cluttered scenes have different movement of media). Podder is further provided because it discloses a machine learning model that provides invariance during translation (Podder: Paragraphs 22-27; translation invariance including contrast (movements) within images (media) provided to the neural network, and Paragraph 70; model run and determination of invariance based on a test) and further retrains the model (Paragraphs 36 and 49). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use a known technique for a known device and provide translation invariance in Beaufays. One would have been motivated to provide the functionality to expand the training evaluation, ensuring an optimized model.
Response to Arguments
Applicant’s remarks have been considered and addressed below.
Regarding the 101 rejection, determining a stop condition is a mental process, because what is required in that instance is a comparison. Determining whether a condition is met is such a comparison, and the processing is being performed through generic computer components.
Additionally, training a model to reach an optimized level is the general purpose of all training, and does not equate to a technical advancement achieved through a significant technical solution.
Regarding the 103 rejection, the inclusion of Selinger addresses the amended claims.
Further, Podder is believed to provide a model with invariance. This is provided in Podder by testing the model for invariance; if the invariance is not met, training is provided to the model, which will adjust parameters based on the testing.
Lastly, regarding the shared features extracted, Yan is believed to provide this feature.
Yan provides an autoencoder; the encoder extracts features, and comparisons of similarity are made based on those features (Paragraphs 27-29).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 11636161 B1 (Chang et al.), Column 2, Lines 45-67.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
In the interest of compact prosecution, Applicant is invited to contact the examiner via electronic media pursuant to USPTO policy outlined in MPEP § 502.03. All electronic communication must be authorized in writing. Applicant may wish to file an Internet Communications Authorization Form PTO/SB/439. Applicant may wish to request an interview using the Interview Practice website: http://www.uspto.gov/patent/laws-and-regulations/interview-practice.
Applicant is reminded Internet e-mail may not be used for communication for matters under 35 U.S.C. § 132 or which otherwise require a signature. A reply to an Office action may NOT be communicated by Applicant to the USPTO via Internet e-mail. If such a reply is submitted by Applicant via Internet e-mail, a paper copy will be placed in the appropriate patent application file with an indication that the reply is NOT ENTERED. See MPEP § 502.03(II).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERROD KEATON whose telephone number is 571-270-1697. The examiner can normally be reached 9:30am to 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor MICHELLE BECHTOLD can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHERROD L KEATON/ Primary Examiner, Art Unit 2148
11-22-2025