DETAILED ACTION
This Action is responsive to Claims filed 12/04/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/04/2025 has been entered.
Status of the Claims
Claims 1, 8, and 15 have been amended. Claims 1-20 are currently pending.
Drawings
Receipt of the Drawings filed 12/04/2025 is acknowledged. These Drawings are acceptable.
Response to Arguments
Applicant’s arguments, see Pages 11-16, filed 12/04/2025, regarding the Prior Art rejection(s) of claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Regarding the Applicant’s arguments pertaining to the structure of the model(s) of the cited references: The Applicant argues a distinction between the instant Application and the cited references Gan and/or Sun. The Examiner respectfully reminds the Applicant that the Specification cannot be read into the claims. The Examiner submits there is insufficient detail pointing toward the implementation of the target model so as to preclude the dual network structure of Gan or Sun. The Examiner contends that, under the broadest reasonable interpretation, Gan and/or Sun continues to read on the generic recitation of a “model.” The generic recitation of “model” alone does not preclude said model from containing multiple network branches.
Similarly, the mere recitation of the word “unified” does not convey the level of implemented detail the Applicant argues. The Examiner submits that, pending further, more substantive, amendments pertaining to the structure or implementation of said feature extraction, the mapping found in the previous Office Action reasonably reads on the claimed limitations.
Similarly, given the claims do not recite such specific implementation so as to preclude multiple network branches, the Examiner contends the ensemble voting of Dietterich remains relevant, and continues to read on the generic recitation of “voting processing.” The Examiner also submits, should more substantive amendments to the model(s) architecture be submitted, a cursory search indicates KNN-based, hard, majority voting on a single model’s candidate outputs would have been a known technique before the time of the Applicant’s filing. See the updated Prior Art Rejections below.
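For illustration only, the KNN-based hard majority voting noted above can be sketched as follows. The function name, the use of similarity-ranked top-k candidates, and the parameter k are assumptions of this sketch; they are not drawn from the claims or from any cited reference:

```python
from collections import Counter

def knn_majority_vote(similarity_scores, labels, k=5):
    """Hard majority vote over the k candidate outputs with the highest
    feature-similarity scores (an illustrative KNN-style selection)."""
    # Rank candidate outputs by similarity score, highest first; keep top k.
    ranked = sorted(zip(similarity_scores, labels), reverse=True)[:k]
    # Each of the top-k candidates casts one hard (unweighted) vote.
    votes = Counter(label for _, label in ranked)
    # The category receiving the highest number of votes is selected.
    return votes.most_common(1)[0][0]
```

In this sketch, a single model's candidate outputs are treated as neighbors, so the majority vote operates on one model's results rather than on an ensemble of models.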
Applicant's arguments, see Pages 16-21, filed 12/04/2025, regarding the 35 U.S.C. 101 Rejection of Claims 1-20 have been fully considered but they are not persuasive.
As presently drafted, under the broadest reasonable interpretation of the Claims, the independent Claims do not recite specific implementation or detail precluding the interpretable abstract idea mental process steps from being performed by a human mind, with or without the aid of pen and paper. The generic recitation of source and target models is not specifically limited to computer implementation (a “model” may be a set of equations, for example), nor is the claimed method limited to computer implementation. Therefore, the “generating…”, “calculating…”, “calculating…”, “adjusting…”, “identifying…”, “performing…”, and “classifying…” steps are not limited to computer implementation and do not claim any kind of structural detail precluding a human mind from performing them. The “receiving…”, “inputting…”, “returning…”, and “storing…” limitations do not recite particular implementation or structure differentiating them from generic computer components performing generic computing functions, or from instructions to apply a generic model onto a sample set obtained by an aforementioned abstract idea mental process step.
The initialization of the source model is recited highly generally. The formulation of the target model is an algorithmic set of mental process steps. The identification of sample sets is interpretable as an abstract idea mental process step. The voting performed after the generic model has executed is interpretable as an abstract idea mental process step. The classification of the image is interpretable as an abstract idea mental process step. Without significant recitation of computer-related implementation and detail, the Examiner contends the independent claims as presently drafted are not directed to eligible subject matter; any specific improvement derived from these steps is, as currently drafted, necessarily rooted in the abstract idea mental process steps themselves. See the updated 35 U.S.C. 101 Rejection below.
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more; and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea. See Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. 208 (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines. (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019.)
Step 1:
Claims 1-7 are directed towards a method (process); Claims 8-14 are directed towards a non-transitory computer-readable storage medium (manufacture); and Claims 15-20 are directed towards a device (machine).
Step 2A – Prong 1:
Claim 1 recites an abstract idea, law of nature, or natural phenomenon. The limitations of “generating a target domain model using a domain adaptation training process”, “calculating a first loss between feature vectors of a source domain model and target domain model;”, “calculating a second loss between covariance matrices of designated intermediate layers of the source domain model and target domain model, the designated intermediate layers comprising feature maps from designated convolutional layers;”, “adjusting the initial model parameters based on a weighted combination of the first loss and second loss to obtain target model parameters;”, “identifying sample sets of commodities of a plurality of categories, the sample sets comprising representative images of commodities in each category;”, “performing voting processing on the plurality of candidate prediction results based on the feature similarity scores to select a category having a highest number of votes;”, and “classifying the image under search into a sample set corresponding to the predicted category;”, under the broadest reasonable interpretation, cover a mental process including an observation, evaluation, judgment or opinion that could be performed in the human mind or with the aid of pencil and paper.
Step 2A – Prong 2:
The additional elements of claim 1 do not integrate the abstract idea into a practical application. The claim recites the additional elements “a computing device”, “an image dataset”, and “a database”, which are recognized as generic computer components recited at a high level of generality. Although these components store and execute instructions to perform the abstract idea itself, this does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it." (See MPEP 2106.04(d)(2), indicating that mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application.)
The additional elements of “a target domain model”, “a domain adaptation training process”, “a…loss”, “feature vectors”, “covariance matrices”, “convolutional layers”, and “commodities” are recognized as non-generic computer components, but are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (See MPEP 2106.05(h)).
The additional elements of “using a domain adaptation training process” and “initializing the target domain model” are nothing more than instructions to ‘apply’ a process on a generic computer, using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). Accordingly, these additional elements do not integrate the judicial exception into a practical application.
The additional elements of “inputting the image under search and the identified sample sets of commodities…and executing unified feature extraction…” amount to instructions to apply the aforementioned abstract idea mental process steps of formulating the target model and curating input data (See MPEP 2106.05(f)).
The additional elements recited in the limitations of “receiving an image…”, “returning a prediction result…”, and “storing…” simply add insignificant extra-solution activity to the judicial exception, related to mere data gathering and output, as discussed in MPEP § 2106.05(g), and therefore do not integrate the judicial exception into a practical application.
Step 2B:
The additional elements of “using a domain adaptation training process” and “initializing the target domain model” are nothing more than instructions to ‘apply’ a process on a generic computer, using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). Accordingly, these additional elements do not amount to significantly more than the judicial exception.
The additional elements of “inputting the image under search and the identified sample sets of commodities…and executing unified feature extraction…” amount to instructions to apply the aforementioned abstract idea mental process steps of formulating the target model and curating input data (See MPEP 2106.05(f)).
The “receiving an image…”, “returning a prediction result…”, and “storing…” limitations are routine and conventional activities analogous to receiving or transmitting data over a network, and do not amount to significantly more than the judicial exception. MPEP 2106.05(d)(II)(i).
Taken alone or as an ordered combination, these additional elements do not amount to significantly more than the above-identified abstract idea. There is no indication that the combination of elements improves the functioning of a computer or any other technology; their collective functions merely provide conventional computer implementation.
For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to independent claims 8 and 15.
Claim 8 recites similar limitations to claim 1, with the inclusion of additional elements “A non-transitory computer-readable storage medium for tangibly storing computer program instructions capable of being executed by a computer processor, the computer program instructions defining the steps of…” (generic computer components).
Claim 15 recites similar limitations to claim 1, with the inclusion of additional elements “A device comprising: a processor; and a storage medium for tangibly storing thereon program logic for execution by the processor, the stored program logic comprising: logic, executed by the processor…” (generic computer components).
Dependent Claims:
Claim 2 (claims 9 and 16): the additional element of “…wherein at least one of the at least two network models corresponds to multiple commodity categories…” has been evaluated under Step 2A Prong 2 and Step 2B and generally links the abstract idea to a particular technology or field of use (MPEP 2106.05(h)).
Claim 3 (claims 10 and 17): the additional element of “…wherein at least one of the at least two network models corresponds to a same commodity category…” has been evaluated under Step 2A Prong 2 and Step 2B and generally links the abstract idea to a particular technology or field of use (MPEP 2106.05(h)).
Claim 4 (claims 11 and 18) recites an abstract idea mental process step (“adjusting…”). The additional elements of “initializing…” and “inputting…” have been evaluated under Step 2A Prong 2 and Step 2B and found to be instructions to apply and mere data-gathering or data-transmittal steps, respectively (MPEP 2106.05(f) for instructions to apply; MPEP 2106.05(d)(II)(i) for data transmittal).
Claim 5 (claims 12 and 19) recites abstract idea mathematical concept or relationship steps (“calculating…”). The additional elements of “inputting…” and “acquiring…” have been evaluated under Step 2A Prong 2 and Step 2B and found to be mere data-gathering or data-transmittal steps (See MPEP 2106.05(d)(II)(i)).
Claim 6 (claims 13 and 20) recites abstract idea mathematical concept or relationship steps (“calculating…”). The additional element “acquiring…” has been evaluated under Step 2A Prong 2 and Step 2B and found to be a mere data-gathering or data-transmittal step (See MPEP 2106.05(d)(II)(i)).
Claim 7 (claim 14) recites abstract idea mathematical concept or relationship steps (“calculating…”). The additional element “obtaining…” has been evaluated under Step 2A Prong 2 and Step 2B and found to be a mere data-gathering or data-transmittal step (See MPEP 2106.05(d)(II)(i)).
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication 2021/0089872 to Gan (hereinafter "Gan") in view of US Patent Publication 2017/014944 to Csurka (hereinafter "Csurka"); Deep CORAL: Correlation Alignment for Deep Domain Adaptation by Baochen Sun and Kate Saenko, arXiv:1607.01719v1, July 6, 2016 (hereinafter "Sun"); Thomas Dietterich, Ensemble Methods in Machine Learning, 2000 (hereinafter "Dietterich"); and Krishnakumar et al. (US 10,095,950 B2) (hereinafter "Krishnakumar").
In regards to claim 1: “A method comprising: generating a target domain model (Gan, Fig. 1, target domain 30b) using a domain adaptation training process, the domain adaptation training process comprising initializing the target domain model (Gan, Fig. 3, target encoder 34) using a model pre-trained by an image dataset to obtain initial model parameters (Gan [0064]: "Thus, for example, the target encoder 34 may be initialized to be identical to the pre-trained source encoder 32 and the target discriminator 38 may be initialized to be identical to the initialized (e.g. both randomly initialized) source discriminator 36."); …a source domain model (Gan, Fig. 1, Source domain 31a) and target domain model; receiving an image under search from a computing device (Gan, Fig. 1B shows the trained encoder receiving an input image; Gan [0102] describes receiving image input);
Gan does not disclose:
calculating a first loss between feature vectors of a source domain model (Gan, Fig. 1, Source domain 31a) and target domain model;
calculating a second loss between covariance matrices of designated intermediate layers of the source domain model and target domain model;
and adjusting the initial model parameters based on a weighted combination of the first loss and second loss to obtain target model parameters (although Gan does teach part of the model being updated/adapted: Gan [0069] describes "The pre-trained source encoder 32 is not further trained. D.sub.S, D.sub.T and E.sub.T will be updated in the adaptation process while E.sub.S is pre-trained and fixed.");
identifying sample sets of commodities of a plurality of categories, the sample sets comprising representative images of commodities in each category;
inputting the image under search and the identified sample sets of commodities of the plurality of categories to the target domain model and
executing unified feature extraction processing on both the image under search and the sample sets to generate a plurality of candidate prediction results, wherein each candidate prediction result comprises a commodity category and a feature similarity score;
performing voting processing on the plurality of candidate prediction results based on the feature similarity scores to select a category having a highest number of votes;
and returning a prediction result output by the target domain model to the computing device.”
Csurka, in the same field of endeavor, discloses:
identifying sample sets of commodities (Csurka Fig. 2, S102) of a plurality of categories (source class representations 32, 34), the sample sets comprising representative images of commodities in each category;
inputting the image under search (input target samples 40) and the identified sample sets of commodities (source class representations 32, 34) to the target domain model (Fig. 2, S106, S108; [0035]: the input target samples 40 and class representations 32, 34 together form an input set 46 of representations which are jointly adapted, by the system, to form a corresponding output set 47 of representations); and
returning a prediction result output by the target domain model to the computing device (Fig. 2, S110; [0038]: A labeling component 60 applies a label to the target sample 40, based on the classifier component output. The class with the highest probability can be assigned as the label of the target sample).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the multi-source network training method of Gan with the prediction process of Csurka because it is merely combining prior art according to known methods to yield predictable results. One would be motivated to make such a combination because, as described at [0108] of Gan, it allows the developer to easily adapt the model to the target user populations.
The combination of Gan and Csurka fails to teach:
calculating a first loss between feature vectors of a source domain model (Gan, Fig. 1, Source domain 31a) and target domain model;
calculating a second loss between covariance matrices of designated intermediate layers of the source domain model and target domain model, the designated intermediate layers comprising feature maps from designated convolutional layers;
and adjusting the initial model parameters based on a weighted combination of the first loss and second loss to obtain target model parameters (although Gan does teach part of the model being updated/adapted: Gan [0069] describes "The pre-trained source encoder 32 is not further trained. D.sub.S, D.sub.T and E.sub.T will be updated in the adaptation process while E.sub.S is pre-trained and fixed.");
executing unified feature extraction processing on both the image under search and the sample sets to generate a plurality of candidate prediction results, wherein each candidate prediction result comprises a commodity category and a feature similarity score;
performing voting processing on the plurality of candidate prediction results based on the feature similarity scores to select a category having a highest number of votes;
Sun, in the same field of endeavor, discloses a system that considers both classification loss and CORAL loss. (Fig. 1)
Accordingly, Sun teaches wherein inputting the sample image data into the source domain model and the target domain model respectively to obtain the calculation result comprises:
calculating a first loss (classification loss) between feature vectors of a source domain model (Source fc8) and target domain model; (Target fc8) (Fig. 1 illustrates calculating a classification loss for feature vectors fc8, the classification loss interpreted as the distance between the first feature vector fc8 of the source and the second vector fc8 of the target);
calculating a second loss (CORAL loss) between covariance matrices of designated intermediate layers of the source domain model (CS) and target domain model (CT), the designated intermediate layers comprising feature maps from designated convolutional layers (Fig. 1 illustrates calculating the CORAL loss. At section 3.1, Sun describes the CORAL loss in Eq. (1) as the distance between source and target covariance matrices);
and adjusting the initial model parameters based on a weighted combination of the first loss and second loss to obtain target model parameters; Gan teaches that part of the model is updated during adaptation (Gan [0069]: "The pre-trained source encoder 32 is not further trained. D.sub.S, D.sub.T and E.sub.T will be updated in the adaptation process while E.sub.S is pre-trained and fixed."), and Sun, page 4, describes that joint training with the classification loss and CORAL loss is likely to learn features that work well on the target domain, as well as the use of the weighted combination of the losses (Equation (6)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the Deep CORAL domain adaptation process of Sun into the neural network training of Gan and Csurka because it is merely combining prior art according to known methods to yield predictable results. One would be motivated to make such a combination because, as described at page 6 of Sun, it is a simple yet effective, high performance, domain adaptation method that works seamlessly with deep networks and can be easily integrated into different layers of network architectures.
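For completeness, the two equations of Sun relied upon above may be reproduced in full (these restate Sun's Eqs. (1) and (6) as published; the notation follows Sun's paper and is not drawn from the instant claims):

```latex
% CORAL loss between source and target covariance matrices (Sun, Eq. (1)),
% where d is the feature dimension and \|\cdot\|_F^2 is the squared
% Frobenius norm:
\ell_{CORAL} = \frac{1}{4d^2} \left\| C_S - C_T \right\|_F^2

% Joint training objective (Sun, Eq. (6)): classification loss plus a
% weighted sum of CORAL losses over the t designated layers, with
% per-layer weights \lambda_i:
\ell = \ell_{CLASS} + \sum_{i=1}^{t} \lambda_i \, \ell_{CORAL}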
The combination of Gan, Csurka, and Sun teaches:
executing unified feature extraction processing on both the image under search and the sample sets to generate a plurality of candidate prediction results, wherein each candidate prediction result comprises a commodity category and a feature similarity score; In addition to Sun extracting features and calculating a feature vector loss, Gan teaches “In particular, the control module 96 may determine a pairwise divergence (T2, S) or (T2, T1) between the target domain (i.e., T2) and the candidate domains (S and T1), and identify the pair with the smallest divergence.” ([0122]). It would have been obvious to one of ordinary skill in the art to output multiple candidate predictions before settling on a best fit.
performing voting processing on the plurality of candidate prediction results based on the feature similarity scores to select a category having a highest number of votes; The combination of Gan, Csurka, and Sun fails to explicitly teach a hard voting process amongst candidate predictions; however, Gan does teach “Note that in other example embodiments, other distance metrics (such as MMD (Maximum Mean Discrepancy), KL Divergence, or Logistic Loss) may be used. The skilled person will be aware of alternative metrics, not discussed herein, that could be used.”
Dietterich, in the same field of endeavor, teaches majority voting on prediction results throughout their disclosure (Abstract, Introduction: Page 2, at least).
A person skilled in the art at the time of the Applicant’s filing would have been aware of hard (or soft, as is used in the aforementioned references) voting as used in Dietterich to choose a suitable candidate output, especially given that voting on a result from one or more models was known at the time of Gan’s and/or Dietterich’s writing.
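For illustration of the hard-voting technique discussed above, a minimal sketch follows. The function name and the score threshold are assumptions of this sketch, not disclosures of any cited reference:

```python
from collections import Counter

def vote_on_candidates(candidates, threshold=0.5):
    """Hard-voting sketch over candidate prediction results.

    candidates: list of (category, feature_similarity_score) pairs.
    Every candidate whose score clears the threshold casts one hard
    vote for its category; the category with the highest number of
    votes is selected.
    """
    votes = Counter(
        category for category, score in candidates if score >= threshold
    )
    if not votes:
        # No candidate cleared the threshold; no category is selected.
        return None
    return votes.most_common(1)[0][0]
```

Unlike soft voting, the similarity scores here only gate which candidates may vote; each qualifying candidate's vote then carries equal weight.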
The combination of Gan, Csurka, Sun, and Dietterich teaches:
classifying the image under search into a sample set corresponding to the predicted category (Gan [0049], [0119], at least; Csurka’s Abstract discusses predicting a class label output (mapping to classifying into a predicted category); Fig. 1 of Sun shows a classification loss, at least).
The combination of Gan, Csurka, Sun, and Dietterich fails to teach:
storing the image under search in a database associated with the predicted category for subsequent retrieval. (Although Csurka, Column 4, Lines 20-24, teaches: “There are several benefits of the exemplary system and method. For example, class representations, such as class means, for different sources can be precomputed and stored so that when predicting class labels for a target domain, the precomputed class means can be retrieved.”) However, Krishnakumar, in a similar field of endeavor of image search, teaches “As shown in FIG. 6, the system for similarity matching, i.e. for providing images similar to a query image from within a set of images, comprises an input/output module 602 connected to a base classifier 604 further connected to a reduction module 606 and comparison module 608, all connected to a central database 610. The I/O module 602 is configured to receive a query image, wherein the query image may be received from a user input or as part of a request made by another system. The I/O module 602 is also configured to store the query image into the central database 610 and provide the same to the base classifier 604, wherein the base classifier 604 comprises of a deep convolutional neural network (DCNN).” (Column 6, Lines 26-39). See also Column 7, Lines 18-34 for more detail on the database and what is stored therein.
Krishnakumar highlights the importance of image similarity comparison in image search and the need for scalable solutions (Background). Given the above cited references also offer limited details regarding the storage of classification results, it would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to store classified images and/or data pertaining to them in a central or accessible database to improve the scalability of the image search.
In regards to claim 2: The combination of Gan with Csurka discloses the method of claim 1, wherein at least one of the at least two network models corresponds to multiple commodity categories. (Gan at [0104] discloses a source encoder 32 that analyses human speech to infer the emotion of the speaker, interpreting 'multiple commodity categories' as analogous to different emotions).
In regards to claim 3: The combination of Gan with Csurka discloses the method of claim 1. Gan further discloses wherein the at least two network models correspond to a same commodity category ([0106] Gan describes adapting a model for population groups in each target country. [0113] Gan describes adapting models to recognize the same emotions, but which are adapted for particular countries - candidate source encoders may include an encoder of the original algorithm (e.g. trained based on US speakers) and other source encoders (e.g. based on Chinese, German, Swiss and Indian speakers)).
In regards to claim 4: The combination of Gan with Csurka discloses the method of claim 1. Gan further discloses: wherein generating a target domain model comprises: initializing the target domain model (Gan: target encoder 34) using a model pretrained by an image data set to obtain initial model parameters; (Gan [0064] "Thus, for example, the target encoder 34 may be initialized to be identical to the pre-trained source encoder 32 and the target discriminator 38 may be initialized to be identical to the initialized (e.g. both randomly initialized) source discriminator 36.").
inputting sample image data into the source domain model and the target domain model respectively to obtain a calculation result of a loss function between the source domain model and the target domain model (Gan [0084] describes: “At operation 63, the relevant weights are updated. As described above, the source and target discriminators 36 and 38 are trained to minimise the respective loss functions of those discriminators”), the loss function controlling a distance between the source domain model and the target domain model in the same feature space (Gan describes at [0126]: “The candidate selection technique may be based on computing divergences or distances between different domains.”);
and adjusting the initial model parameters based on the calculation result to obtain target model parameters (Gan [0069] describes: “The pre-trained source encoder 32 is not further trained. D.sub.S, D.sub.T and E.sub.T will be updated in the adaptation process while E.sub.S is pre-trained and fixed.”).
In regards to claim 5: The combination of Gan, Csurka, and Sun discloses the method of claim 1. Sun teaches wherein inputting the sample image data into the source domain model and the target domain model respectively to obtain the calculation result comprises:
calculating a distance between a first feature vector (Source fc8) and a second feature vector (Target fc8) to obtain a first intermediate result (classification loss), the first feature vector generated after inputting the sample image data into a network model of a corresponding category in the source domain model, and the second feature vector generated after inputting the sample image data into the target domain model; (Fig. 1 illustrates calculating a classification loss for feature vectors fc8, the classification loss interpreted as the distance between the first feature vector fc8 of the source and the second vector fc8 of the target);
calculating a distance between a first covariance matrix (CS) and a second covariance matrix (CT) to obtain a second intermediate result (CORAL loss), the first covariance matrix comprising a covariance matrix of designated intermediate layer features in the source domain model, and the second covariance matrix comprising a covariance matrix of designated intermediate layer features in the target domain model; (Fig. 1 illustrates calculating the CORAL loss. At section 3.1, Sun describes the CORAL loss in Eq. (1) as the distance between source and target covariance matrices) and
acquiring the calculation result based on the first intermediate result and the second intermediate result. (Sun, page 4, describes that joint training with the classification loss and CORAL loss is likely to learn features that work well on the target domain).
In regards to claim 6: As described above, the combination of Gan in view of Csurka and Sun discloses the method of claim 5. Sun further discloses that the calculating the distance between the first covariance matrix and the second covariance matrix to obtain the second intermediate result comprising:
acquiring the first covariance matrix from a designated intermediate layer (single feature layer) of a network model of a category corresponding to the sample image data, and acquiring the second covariance matrix from a designated intermediate layer (single feature layer) of the target domain model; and (Sun, 3.1, describes determining a CORAL loss for a single feature layer);
calculating and obtaining the second intermediate result using the first covariance matrix, the second covariance matrix, and a feature dimension of the designated intermediate layer. (Equation (1) illustrates that the dimension d is part of the covariance matrix loss determination).
In regards to claim 7: As described above, the combination of Gan in view of Csurka and Sun discloses the method of claim 5. Sun further discloses obtaining the calculation result based on the first intermediate result and the second intermediate result comprising:
calculating a product of a preset scale factor (λ) and the second intermediate result (Lcoral); and
calculating a sum of the first intermediate result (Lclass) and the product to obtain the calculation result. (Section 3.2, Equation (6) teaches calculating a sum of the first intermediate result and the product)
In regards to claims 8-14: Claims 8-14 recite similar limitations to claims 1-7, with the exception of “A non-transitory computer-readable storage medium for tangibly storing computer program instructions capable of being executed by a computer processor, the computer program instructions defining the steps of:” (Gan, Fig. 11, 304); therefore, both sets of claims are similarly rejected.
In regards to claims 15-20: Claims 15-20 recite similar limitations to claims 1-7, with the exception of “A device comprising: a processor; and a storage medium for tangibly storing thereon program logic for execution by the processor, the stored program logic comprising:” (Gan, Fig. 11, 300); therefore, both sets of claims are similarly rejected.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRIFFIN T BEAN whose telephone number is (703)756-1473. The examiner can normally be reached M - F 7:30 - 4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GRIFFIN TANNER BEAN/Examiner, Art Unit 2121
/Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121