DETAILED ACTION

Claim Status

This is the first Office action on the merits in response to the application filed on 8/16/2023. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are currently pending and have been examined.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/16/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Under Step 1 of the Section 101 analysis, Claims 1-11 and 14-20 are drawn to a device, which is within the four statutory categories (i.e., a machine); Claim 12 is drawn to a method, which is within the four statutory categories (i.e., a process); and Claim 13 is drawn to a non-transitory computer-readable medium, which is within the four statutory categories (i.e., a manufacture). Since the claims are directed toward statutory categories, it must be determined whether the claims are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea). Based on consideration of all of the relevant factors with respect to each claim as a whole, claims 1-20 are determined to be directed to an abstract idea. The rationale for this determination is explained below.

Regarding Claims 1 and 12-13: Claims 1 and 12-13 are drawn to an abstract idea without significantly more.
The claims recite “a data acquisition unit that acquires target data; an inference unit that performs an inference task on the acquired target data by using an inference model trained by machine learning; and an output unit that outputs information on a result of performing the inference task, wherein at least some of a plurality of parameters of the inference model are represented by a matrix, the matrix includes a first submatrix and a second submatrix, the number of elements in each row and each column of the first submatrix and the second submatrix are the same, and a value of each element of the second submatrix is adjusted to match a product of the first submatrix and a diagonal matrix.” Under Step 2A, Prong One, the limitations, as underlined above, are processes that, under their broadest reasonable interpretation, cover mathematical concepts such as mathematical relationships, mathematical formulas or equations, or mathematical calculations. For example, but for the “unit”, “machine learning”, and “inference model” language, the underlined limitations in the context of this claim encompass mathematical concepts. The series of steps belongs to typical mathematical calculations. Under Step 2A, Prong Two, this judicial exception is not integrated into a practical application. In particular, the claims only recite the additional elements “An inference device comprising:”, “An inference method of causing a computer to execute the following steps comprising:”, “A non-transitory computer storage media that stores an inference program causing a computer to execute the following steps:”, “unit”, “machine learning”, and “inference model”.
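For illustration only (not part of the prosecution record), the submatrix relationship recited in the claim language quoted above can be expressed as a short numerical sketch; the variable names (W, A, B, D) are hypothetical and do not appear in the claims:

```python
import numpy as np

# Hypothetical 2x4 parameter matrix W holding two 2x2 submatrices A and B.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))   # first submatrix
D = np.diag([0.5, 2.0])           # diagonal matrix
B = A @ D                         # second submatrix: each element matches a
                                  # product of the first submatrix and D
W = np.hstack([A, B])             # the claimed matrix includes both submatrices

# The two submatrices have the same number of elements in each row and column.
assert A.shape == B.shape
# Each column of B is a scaled copy of the corresponding column of A.
assert np.allclose(B[:, 0], 0.5 * A[:, 0])
assert np.allclose(B[:, 1], 2.0 * A[:, 1])
```

Under this reading, the limitation is a purely arithmetic constraint between matrix entries, consistent with the mathematical-concepts characterization applied above.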
The additional elements are recited at a high level of generality (i.e., performing generic functions of an interaction) such that they amount to no more than mere instructions to apply the exception using a generic computer component, merely implementing an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Additionally, regarding the specification and claims: there is no improvement in the functioning of a computer or an improvement to another technology or technical field present; there is no applying or using the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition present; there is no implementing the judicial exception with, or using the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim present; there is no effecting a transformation or reduction of a particular article to a different state or thing present; and there is no applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment present, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Accordingly, these additional elements, individually or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements in the process amount to no more than mere instructions to apply the exception using generic computer components.
Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

Regarding Claim 9: Claim 9 is drawn to an abstract idea without significantly more. The claim recites “a data acquisition unit that acquires a plurality of learning data sets each constituted by a combination of training data and a correct answer label indicating a correct answer of an inference task for the training data; and a learning processing unit that performs machine learning of an inference model by using the plurality of learning data sets, the learning processing unit being configured such that at least some of a plurality of parameters of the inference model are represented by a matrix, the matrix includes a first submatrix and a second submatrix, the number of elements in each row and each column of the first submatrix and the second submatrix are the same, and the machine learning is performed for each of the learning data sets by training the inference model so that a result of performing the inference task on the training data by using the inference model matches a correct answer indicated by the correct answer label and a value of each element of the second submatrix is adjusted to match a product of the first submatrix and a diagonal matrix.” Under Step 2A, Prong One, the limitations, as underlined above, are processes that, under their broadest reasonable interpretation, cover mathematical concepts such as mathematical relationships, mathematical formulas or equations, or mathematical calculations. For example, but for the “unit”, “machine learning”, and “inference model” language, the underlined limitations in the context of this claim encompass mathematical concepts. The series of steps belongs to typical mathematical calculations. Under Step 2A, Prong Two, this judicial exception is not integrated into a practical application.
In particular, the claim only recites the additional elements “A model generation device comprising:”, “unit”, “machine learning”, and “inference model”. The additional elements are recited at a high level of generality (i.e., performing generic functions of an interaction) such that they amount to no more than mere instructions to apply the exception using a generic computer component, merely implementing an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Additionally, regarding the specification and claims: there is no improvement in the functioning of a computer or an improvement to another technology or technical field present; there is no applying or using the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition present; there is no implementing the judicial exception with, or using the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim present; there is no effecting a transformation or reduction of a particular article to a different state or thing present; and there is no applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment present, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Accordingly, these additional elements, individually or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements in the process amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

Regarding Claims 2-8, 10-11, and 14-20: Dependent claims 2-6, 8, 10, 14-15, and 19-20 only further elaborate the abstract idea and do not recite additional elements. Dependent claims 7, 11, and 16-18 include additional limitations, for example, “inference model”, “neural network”, and “neurons” (Claims 7 and 16-18); and “inference model”, “neural network”, “neurons”, and “computation” (Claim 11); but none of these limitations are deemed significantly more than the abstract idea because, as stated above, they require no more than generic computer structures or signals to be executed, and do not recite any improvements to the functioning of a computer or improvements to any other technology or technical field. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and their collective functions merely provide conventional computer implementation or implement the judicial exception on a generic computer. Therefore, whether taken individually or as an ordered combination, claims 2-8, 10-11, and 14-20 are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
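For illustration only (not part of the prosecution record), the claim-9 training scheme analyzed above, fitting labeled training data while keeping the second submatrix tied to a product of the first submatrix and a diagonal matrix, can be sketched as follows. All names, the linear model, and the least-squares objective are hypothetical assumptions, not taken from the application or the cited references:

```python
import numpy as np

# A and the diagonal entries d are the trained parameters; B is kept adjusted
# to equal A @ diag(d) at every step, as in the claim-9 limitation.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))   # first submatrix (trainable)
d = np.array([1.0, 1.0])          # diagonal entries of D (trainable)
X = rng.standard_normal((32, 4))  # hypothetical training data
y = rng.standard_normal((32, 2))  # hypothetical correct-answer labels

lr = 0.01
for _ in range(200):
    B = A @ np.diag(d)            # second submatrix tied to A and D
    W = np.hstack([A, B])         # full 2x4 parameter matrix
    err = X @ W.T - y             # mismatch with the correct answers
    gW = err.T @ X / len(X)       # gradient of the squared error w.r.t. W
    # Chain rule through B = A diag(d): A receives gradient directly and via B.
    gA = gW[:, :2] + gW[:, 2:] * d
    gd = (gW[:, 2:] * A).sum(axis=0)
    A -= lr * gA
    d -= lr * gd

B = A @ np.diag(d)
# The claimed submatrix relationship holds after training.
assert np.allclose(B, A @ np.diag(d))
```

The sketch only shows that such a constrained update rule is a routine sequence of matrix calculations; it does not purport to reproduce the applicant's disclosed method.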
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, 6-10, 12-13, 16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Li (US 20180046914 A1) in view of Motoya (US 20180276527 A1).
Regarding Claims 1 and 12-13, Li teaches An inference device comprising (Li: Abstract; Paragraph(s) 0220): An inference method of causing a computer to execute the following steps comprising (Li: Abstract; Paragraph(s) 0220): A non-transitory computer storage media that stores an inference program causing a computer to execute the following steps (Li: Abstract; Paragraph(s) 0220, 0100): a data acquisition unit that acquires target data (Li: Paragraph(s) 0026, 0146 teach(es) In the model shown in FIG. 4, it involves computing acoustic output probability using a deep learning model. That is, conducting similarity prediction between a series of input speech signals and various possible candidates); an inference unit that performs an inference task on the acquired target data by using an inference model trained by machine learning (Li: Paragraph(s) 0220 teach(es) Efficient Inference Engine on Compressed Deep Neural Network); and an output unit that outputs information on a result of performing the inference task (Li: Paragraph(s) 0080 teach(es) The accuracy of ANN can be measured by, for example, inputting a benchmark test data to the ANN and decide how accurate the prediction results of said ANN is), wherein at least some of a plurality of [parameters] of the inference model are represented by a matrix, the matrix includes a first submatrix and a second submatrix (Li: Paragraph(s) 0219-0220 teach(es) a submatrix consisting of the selected rows according to a specific sparse matrix storage format.
Here, true value, relative row index and column pointer vectors are used to represent the original sparse matrix; Efficient Inference Engine on Compressed Deep Neural Network), the number of elements in each row and each column of the first submatrix and the second submatrix are the same (Li: Paragraph(s) 0208, 0199 teach(es) it divides a dense matrix by regularly extracting one row out of every N rows, so as to form N submatrices of same size; dividing a dense matrix into a plurality of submatrices of similar size before compression). However, Li does not explicitly teach a plurality of parameters of the inference model, and a value of each element of the second submatrix is adjusted to match a product of the first submatrix and a diagonal matrix. Motoya, from the same or a similar field of endeavor, teaches a plurality of parameters of the inference model (Motoya: Paragraph(s) 0008, 0025 teach(es) a convolutional neural network learning method for determining a matrix data calculation parameter of a convolution calculation of the convolutional neural network; A first half (UkSkVkT), which is a product of the left orthogonal matrix first half Uk 135, diagonal matrix first half Sk, and right orthogonal matrix first half VkT, is set as matrix data used in the convolution calculation conv1 of the first half, and a second half (U(n−k)S(n−k)V(n−k)T), which is a product of the left orthogonal matrix second half U(n−k), diagonal matrix second half S(n−k) 138, and right orthogonal matrix second half V(n−k)T, is set as matrix data used in the convolution calculation conv1 of the second half), and a value of each element of the second submatrix is adjusted to match a product of the first submatrix and a diagonal matrix (Motoya: Paragraph(s) 0040-0043, 0051-0054 teach(es) the matrix data A 131 is decomposed, by a singular value decomposition, into three matrix products which are mathematically relevant).
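As a hedged aid to reading the Motoya passages quoted above, the first-half/second-half singular value decomposition can be sketched numerically as follows; the variable names and the 4x4 matrix size are illustrative assumptions, not taken from Motoya:

```python
import numpy as np

# A square matrix is decomposed by SVD into U @ diag(s) @ Vt, and the product
# is split into a "first half" (top-k singular values) and a "second half"
# (the remaining n-k), analogous to UkSkVkT and U(n-k)S(n-k)V(n-k)T.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

U, s, Vt = np.linalg.svd(A)
k = 2
first_half = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # Uk Sk VkT
second_half = U[:, k:] @ np.diag(s[k:]) @ Vt[k:, :]   # U(n-k) S(n-k) V(n-k)T

# The two halves sum back to the original matrix data.
assert np.allclose(first_half + second_half, A)
```

The diagonal matrix of singular values is what links this decomposition to the claimed diagonal-product relationship between submatrices.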
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Li to incorporate the teachings of Motoya for a plurality of parameters of the inference model, and a value of each element of the second submatrix is adjusted to match a product of the first submatrix and a diagonal matrix. There is motivation to combine Motoya into Li because Motoya's teachings of parameters of the inference model and matrix products would facilitate the use of a neural network (Motoya: Paragraph(s) 0008, 0025, 0040-0043).

Regarding Claim 9, Li teaches A model generation device comprising (Li: Abstract; Paragraph(s) 0220): a data acquisition unit that acquires a plurality of learning data sets each constituted by a combination of training data and a correct answer label indicating a correct answer of an inference task for the training data (Li: Paragraph(s) 0026, 0146, 0080, 0168-0171 teach(es) In the model shown in FIG. 4, it involves computing acoustic output probability using a deep learning model.
That is, conducting similarity prediction between a series of input speech signals and various possible candidates; Iterative training is divided into several “epochs”, wherein an epoch (hereinafter referred to as “one iteration”) means that all data in the training dataset has been run for once, and the total number of iterations shall not be more than max_iters or less than min_iters); and a learning processing unit that performs machine learning of an inference model by using the plurality of learning data sets (Li: Paragraph(s) 0220 teach(es) Efficient Inference Engine on Compressed Deep Neural Network), the learning processing unit being configured such that at least some of a plurality of [parameters] of the inference model are represented by a matrix, the matrix includes a first submatrix and a second submatrix (Li: Paragraph(s) 0219-0220 teach(es) a submatrix consisting of the selected rows according to a specific sparse matrix storage format. Here, true value, relative row index and column pointer vectors are used to represent the original sparse matrix; Efficient Inference Engine on Compressed Deep Neural Network), the number of elements in each row and each column of the first submatrix and the second submatrix are the same (Li: Paragraph(s) 0208, 0199 teach(es) it divides a dense matrix by regularly extracting one row out of every N rows, so as to form N submatrices of same size; dividing a dense matrix into a plurality of submatrices of similar size before compression).
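For illustration, the Li division scheme quoted above, extracting one row out of every N rows of a dense matrix to form N submatrices of the same size, can be sketched as follows; the 6x4 matrix and N=3 are hypothetical values chosen for the example:

```python
import numpy as np

# A dense 6x4 matrix divided by regularly extracting one row out of every N.
W = np.arange(24).reshape(6, 4)
N = 3
subs = [W[i::N, :] for i in range(N)]  # submatrix i takes rows i, i+N, i+2N, ...

# All N submatrices have the same size (same elements per row and per column).
assert all(s.shape == (6 // N, 4) for s in subs)

# Interleaving the submatrices back recovers the original dense matrix.
recovered = np.empty_like(W)
for i in range(N):
    recovered[i::N, :] = subs[i]
assert np.array_equal(recovered, W)
```

This row-interleaved partition is what makes the divided submatrices equal-sized, the property mapped to the claimed same-number-of-elements limitation.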
However, Li does not explicitly teach a plurality of parameters of the inference model, and the machine learning is performed for each of the learning data sets by training the inference model so that a result of performing the inference task on the training data by using the inference model matches a correct answer indicated by the correct answer label and a value of each element of the second submatrix is adjusted to match a product of the first submatrix and a diagonal matrix. Motoya, from the same or a similar field of endeavor, teaches a plurality of parameters of the inference model (Motoya: Paragraph(s) 0008, 0025 teach(es) a convolutional neural network learning method for determining a matrix data calculation parameter of a convolution calculation of the convolutional neural network; A first half (UkSkVkT), which is a product of the left orthogonal matrix first half Uk 135, diagonal matrix first half Sk, and right orthogonal matrix first half VkT, is set as matrix data used in the convolution calculation conv1 of the first half, and a second half (U(n−k)S(n−k)V(n−k)T), which is a product of the left orthogonal matrix second half U(n−k), diagonal matrix second half S(n−k) 138, and right orthogonal matrix second half V(n−k)T, is set as matrix data used in the convolution calculation conv1 of the second half), and the machine learning is performed for each of the learning data sets by training the inference model so that a result of performing the inference task on the training data by using the inference model matches a correct answer indicated by the correct answer label (Motoya: Paragraph(s) 0142-0143 teach(es) by using an image data set for training data, a learning algorithm for the convolutional neural network is activated by a learning device of the convolutional neural network) and a value of each element of the second submatrix is adjusted to match a product of the first submatrix and a diagonal matrix (Motoya: Paragraph(s) 0040-0043 teach(es) the matrix
data A 131 is decomposed, by a singular value decomposition, into three matrix products which are mathematically relevant). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Li to incorporate the teachings of Motoya for a plurality of parameters of the inference model, and the machine learning is performed for each of the learning data sets by training the inference model so that a result of performing the inference task on the training data by using the inference model matches a correct answer indicated by the correct answer label and a value of each element of the second submatrix is adjusted to match a product of the first submatrix and a diagonal matrix. There is motivation to combine Motoya into Li because Motoya's teachings of parameters of the inference model and matrix products would facilitate the use of a neural network (Motoya: Paragraph(s) 0008, 0025, 0040-0043, 0142-0143).

Regarding Claims 2 and 10, the combination of Li and Motoya teaches all the limitations of claims 1 and 9 above; and Li further teaches wherein, in at least a portion of the matrix, a scaling relationship is established (Li: Paragraph(s) 0009 teach(es) Some of the advanced neural network models might have hundreds of layers and billions of connections, and the implementation thereof is both calculation-centric and memory-centric.
Since neural networks are becoming larger, it is critical to compress neural network models into smaller scale) such that at least a portion of the matrix is divided into MxN submatrices so that submatrices having the same number of elements in each row and each column are arranged in M rows and N columns, the submatrix disposed in any one row constitutes the first submatrix for submatrices disposed in rows other than the row in each column, and the submatrices disposed in the other rows constitute the second submatrix (Li: Paragraph(s) 0051-0055, 0207-0208 teach(es) dividing step, for dividing at least one of said plurality of matrices into a plurality of submatrices; compression step, for compressing the submatrices into sparse submatrices; and encoding step, for encoding the compressed sparse submatrices; each divided submatrix needs to be of same (or, similar) size and has similar number of non-zero elements).

Regarding Claim 4, the combination of Li and Motoya teaches all the limitations of claim 2 above; and Li further teaches wherein the scaling relationship is recursively established within at least a portion of the matrix by repeating the establishment of the scaling relationship within the submatrices that constitute the first submatrix (Li: Paragraph(s) 0009, 0208 teach(es) Some of the advanced neural network models might have hundreds of layers and billions of connections, and the implementation thereof is both calculation-centric and memory-centric. Since neural networks are becoming larger, it is critical to compress neural network models into smaller scale; it divides a dense matrix by regularly extracting one row out of every N rows, so as to form N submatrices of same size).

Regarding Claim 6, the combination of Li and Motoya teaches all the limitations of claim 2 above; however, the combination does not explicitly teach wherein M and N are each 2.
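For illustration only, the MxN block arrangement addressed in claims 2 and 6 can be sketched with M = N = 2; the block sizes and the per-column diagonal matrices below are hypothetical values, not taken from the application or the references:

```python
import numpy as np

# A 4x4 matrix divided into a 2x2 grid of equal-size 2x2 blocks: the first-row
# blocks serve as the "first submatrix" in each column, and each second-row
# block is tied to the block above it by a diagonal scaling.
rng = np.random.default_rng(3)
top = [rng.standard_normal((2, 2)) for _ in range(2)]  # first-row blocks
diags = [np.diag([2.0, 0.5]), np.diag([3.0, 1.0])]     # per-column diagonal matrices
bottom = [t @ d for t, d in zip(top, diags)]           # second-row blocks

W = np.block([top, bottom])                            # blocks in M rows, N columns

# Every block has the same number of elements per row and per column, and each
# second-row block matches the product of the first-row block and a diagonal.
for t, b, d in zip(top, bottom, diags):
    assert t.shape == b.shape
    assert np.allclose(b, t @ d)
```

With M = N = 2 the scaling relationship can also be re-applied inside the first-row blocks, which is the recursive arrangement addressed in claim 4.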
Motoya further teaches wherein M and N are each 2 (Motoya: Paragraph(s) 0138 teach(es) the matrix data of the convolution calculation is a square matrix, that is, when n=m, an eigenvalue decomposition may be performed other than a singular value decomposition). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Li and Motoya to incorporate the teachings of Motoya for wherein M and N are each 2. There is motivation to combine Motoya into the combination of Li and Motoya because Motoya's teachings of parameters of a square matrix would facilitate the use of a neural network (Motoya: Paragraph(s) 0008, 0025, 0040-0043).

Regarding Claims 7, 16, and 18, the combination of Li and Motoya teaches all the limitations of claims 1, 2, and 4 above; and Li further teaches wherein the inference model is constituted by a neural network, and each element of the matrix is configured to correspond to a weight of connection between neurons in the neural network (Li: Paragraph(s) 0005, 0051-0055 teach(es) In neural networks, there exists a large number of nodes (also called neurons) which are connected to each other. Neural networks have two features: 1) Each neuron calculates the weighted input values from other adjacent neurons via certain output function (also called Activation Function); 2) The information transmission intensity between neurons is measured by so-called weights, and such weights might be adjusted by self-learning of certain algorithms).

Regarding Claims 8 and 19, the combination of Li and Motoya teaches all the limitations of claims 1 and 2 above; however, the combination does not explicitly teach wherein the target data is constituted by image data showing a product, and the inference task is to determine whether the product shown in the image data has a defect.
Motoya further teaches wherein the target data is constituted by image data showing a product, and the inference task is to determine whether the product shown in the image data has a defect (Motoya: Paragraph(s) 0037, 0046 teach(es) a vector data which is a part of the middle layer by applying the matrix vector product of the first half to the vector data which is a part of the image data; to maintain a level of accuracy so that the maximum value can be detected in the subsequent pooling calculation pool according to the present embodiment). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Li and Motoya to incorporate the teachings of Motoya for wherein the target data is constituted by image data showing a product, and the inference task is to determine whether the product shown in the image data has a defect. There is motivation to combine Motoya into the combination of Li and Motoya because Motoya's teachings of processing of image data would facilitate the use of a neural network (Motoya: Paragraph(s) 0037, 0046).

Claims 3, 5, 14-15, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Motoya, as applied to claim 2 above, and further in view of Penumarthi (US 20210304037 A1).

Regarding Claim 3, the combination of Li and Motoya teaches all the limitations of claim 2 above; however, the combination does not explicitly teach wherein M and N are the same prime number S. Penumarthi, from the same or a similar field of endeavor, teaches wherein M and N are the same prime number S (Penumarthi: Paragraph(s) 0049 teach(es) the dimensions of initial matrix shall be those prime numbers).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Li and Motoya to incorporate the teachings of Penumarthi for wherein M and N are the same prime number S. There is motivation to combine Penumarthi into the combination of Li and Motoya because Penumarthi's teachings of prime numbers for matrix dimensions would facilitate the use of a neural network (Penumarthi: Paragraph(s) 0049).

Regarding Claims 5 and 15, the combination of Li and Motoya teaches all the limitations of claims 4 and 14 above; however, the combination does not explicitly teach wherein M and N are the same prime number S, and at least a portion of the matrix is constituted by a square matrix of which the number of elements is a power of the prime number S. Penumarthi, from the same or a similar field of endeavor, teaches wherein M and N are the same prime number S, and at least a portion of the matrix is constituted by a square matrix of which the number of elements is a power of the prime number S (Penumarthi: Paragraph(s) 0049 teach(es) the dimensions of initial matrix shall be those prime numbers). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Li and Motoya to incorporate the teachings of Penumarthi for wherein M and N are the same prime number S, and at least a portion of the matrix is constituted by a square matrix of which the number of elements is a power of the prime number S. There is motivation to combine Penumarthi into the combination of Li and Motoya because Penumarthi's teachings of prime numbers for matrix dimensions would facilitate the use of a neural network (Penumarthi: Paragraph(s) 0049).
Regarding Claim 14, the combination of Li, Motoya, and Penumarthi teaches all the limitations of claim 3 above; and Li further teaches wherein the scaling relationship is recursively established within at least a portion of the matrix by repeating the establishment of the scaling relationship within the submatrices that constitute the first submatrix (Li: Paragraph(s) 0009, 0208 teach(es) Some of the advanced neural network models might have hundreds of layers and billions of connections, and the implementation thereof is both calculation-centric and memory-centric. Since neural networks are becoming larger, it is critical to compress neural network models into smaller scale; it divides a dense matrix by regularly extracting one row out of every N rows, so as to form N submatrices of same size).

Regarding Claim 17, the combination of Li, Motoya, and Penumarthi teaches all the limitations of claim 3 above; and Li further teaches wherein the inference model is constituted by a neural network, and each element of the matrix is configured to correspond to a weight of connection between neurons in the neural network (Li: Paragraph(s) 0005, 0051-0055 teach(es) In neural networks, there exists a large number of nodes (also called neurons) which are connected to each other. Neural networks have two features: 1) Each neuron calculates the weighted input values from other adjacent neurons via certain output function (also called Activation Function); 2) The information transmission intensity between neurons is measured by so-called weights, and such weights might be adjusted by self-learning of certain algorithms).

Regarding Claim 20, the combination of Li, Motoya, and Penumarthi teaches all the limitations of claim 3 above; however, the combination does not explicitly teach wherein the target data is constituted by image data showing a product, and the inference task is to determine whether the product shown in the image data has a defect.
Motoya further teaches wherein the target data is constituted by image data showing a product, and the inference task is to determine whether the product shown in the image data has a defect (Motoya: Paragraph(s) 0037, 0046 teach(es) a vector data which is a part of the middle layer by applying the matrix vector product of the first half to the vector data which is a part of the image data; to maintain a level of accuracy so that the maximum value can be detected in the subsequent pooling calculation pool according to the present embodiment). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Li, Motoya, and Penumarthi to incorporate the teachings of Motoya for wherein the target data is constituted by image data showing a product, and the inference task is to determine whether the product shown in the image data has a defect. There is motivation to combine Motoya into the combination of Li, Motoya, and Penumarthi because Motoya's teachings of processing of image data would facilitate the use of a neural network (Motoya: Paragraph(s) 0037, 0046).

Allowable Subject Matter

Claim 11 would be allowable if rewritten or amended to overcome the rejection under 35 U.S.C. 101 set forth in this Office action. The prior art of record does not teach the specific steps recited in the claim.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zadeh (US 20220121884 A1) teaches System And Method For Extremely Efficient Image And Pattern Recognition And Artificial Intelligence Platform, including inference, camera, gradient, and diagonal. Ciftci (US 20210209388 A1) teaches Fakecatcher: Detection Of Synthetic Portrait Videos Using Biological Signals, including submatrix, inference, weight, and gradient.
Chen (US 20210110089 A1) teaches Generating Computer Simulations Of Manipulations Of Materials Based On Machine Learning From Measured Statistics Of Observed Manipulations, including medical, MRI, CT, and scaling.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLAY LEE whose telephone number is (571)272-3309. The examiner can normally be reached Monday-Friday 8-5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neha Patel, can be reached at (571)270-1492. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CLAY C LEE/
Primary Examiner, Art Unit 3699