DETAILED ACTION

This Office Action is sent in response to the Applicant's Communication received on 08/10/2023 for application number 18/447,675. The Office hereby acknowledges receipt of the following items, which have been placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, IDS, and Claims. Claims 1-5 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 4 recites the limitation "extracting the common feature vector" in line 6. There is insufficient antecedent basis for this limitation in the claim. It is unclear whether the aforementioned limitation refers to the "common feature vector of a previous step" or the "common feature vector of the current step."

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1-5 are directed to a method and are therefore directed to one of the four statutory categories of patent-eligible subject matter.
Claim 1

Step 2A Prong One: Claim 1 recites:

"extracting a common feature vector of the current step based on the common feature vector of the previous step and the input data corresponding to the current task;"

Extracting a common feature vector of the current step based on the common feature vector of the previous step and the input data corresponding to the current task is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.

"extracting an output feature vector corresponding to the current task based on the common feature vector of the current step;"

Extracting an output feature vector corresponding to the current task based on the common feature vector of the current step is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows:

"An operation method of a multi-task learning model;"

The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)).

"obtaining a common feature vector of a previous step; receiving input data corresponding to a current task executed in a current step from among a plurality of tasks;"

These limitations are mere data gathering recited at a high level of generality, and are thus insignificant extra-solution activity (MPEP 2106.05(g)).

"outputting output data corresponding to the current task based on the output feature vector corresponding to the current task;"

This limitation is insignificant extra-solution activity, as it amounts to necessary data outputting (MPEP 2106.05(g)(3)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:

"An operation method of a multi-task learning model;"

The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.

"obtaining a common feature vector of a previous step; receiving input data corresponding to a current task executed in a current step from among a plurality of tasks;"

These limitations are mere data gathering recited at a high level of generality, and are thus insignificant extra-solution activity. See MPEP 2106.05(g). The additional "receiving" element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network, which is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). It therefore cannot provide an inventive concept.

"outputting output data corresponding to the current task based on the output feature vector corresponding to the current task;"

This limitation is insignificant extra-solution activity, as it amounts to necessary data outputting (MPEP 2106.05(g)(3)). It falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi).

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
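For illustration only, the recurrent flow recited in claim 1 (carrying a common feature vector from step to step, extracting a task-specific output vector, and outputting data) can be sketched as follows. This is a minimal sketch: every function, weight matrix, and dimension below is hypothetical and is not drawn from the claim, the specification, or the cited art.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the claimed extractors; the claim recites
# these steps at a higher level of generality than any one implementation.
W_common = rng.standard_normal((8, 8 + 4))  # maps [prev common | input] -> common
W_task = rng.standard_normal((3, 8))        # maps common -> task output features

def step(common_prev, task_input):
    # "extracting a common feature vector of the current step based on the
    #  common feature vector of the previous step and the input data"
    common_cur = np.tanh(W_common @ np.concatenate([common_prev, task_input]))
    # "extracting an output feature vector corresponding to the current task
    #  based on the common feature vector of the current step"
    out_vec = W_task @ common_cur
    return common_cur, out_vec

common = np.zeros(8)              # "obtaining a common feature vector of a previous step"
for t in range(3):                # tasks executed step by step, from among a plurality
    x = rng.standard_normal(4)    # "receiving input data corresponding to a current task"
    common, out = step(common, x) # the common feature vector is carried forward
print(out.shape)                  # "outputting output data" based on the output feature vector
```

The sketch makes concrete why each recited extraction, taken alone, is evaluated as a mental or mathematical step: each is a generic mapping from one vector to another.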
Claim 2

Step 2A Prong One: Claim 2 recites:

"extracting a first feature vector based on the common feature vector of the previous step;"

Extracting a first feature vector based on the common feature vector of the previous step is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.

"extracting a second feature vector based on the input data corresponding to the current task;"

Extracting a second feature vector based on the input data corresponding to the current task is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.

"extracting the common feature vector of the current step based on the first feature vector and the second feature vector;"

Extracting the common feature vector of the current step based on the first feature vector and the second feature vector is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.

Step 2A Prong Two and Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d)(I)), failing Step 2A Prong Two. The claim is ineligible.

Claim 3

Step 2A Prong One: Claim 3 recites:

"extracting the second feature vector including common feature information of the input data corresponding to the current task;"

Extracting the second feature vector including common feature information of the input data corresponding to the current task is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows:

"extracting the first feature vector including common feature information over time by inputting the common feature vector of the previous step to a first model;"

This limitation is mere data gathering recited at a high level of generality, and is thus insignificant extra-solution activity (MPEP 2106.05(g)).

"inputting the input data corresponding to the current task to a second model;"

The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:

"extracting the first feature vector including common feature information over time by inputting the common feature vector of the previous step to a first model;"

This limitation is mere data gathering recited at a high level of generality, and is thus insignificant extra-solution activity. See MPEP 2106.05(g). The additional "inputting" element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the inputting step amounts to no more than mere data gathering and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). It therefore cannot provide an inventive concept.

"inputting the input data corresponding to the current task to a second model;"

The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception.
This does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.

Claim 4

Step 2A Prong One: Claim 4 recites:

"determining a weight between the first feature vector and the second feature vector;"

Determining a weight between the first feature vector and the second feature vector is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.

"extracting the common feature vector through an inner product calculation between the common feature vector of the previous step and the first feature vector and the second feature vector, based on the determined weight;"

Extracting the common feature vector through an inner product calculation between the common feature vector of the previous step and the first feature vector and the second feature vector, based on the determined weight, merely uses textual replacements for particular equations, and is therefore a mathematical concept.

Step 2A Prong Two and Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d)(I)), failing Step 2A Prong Two. The claim is ineligible.

Claim 5

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows:

"wherein the number and type of input data corresponding to the current task vary for each step;"

The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:

"wherein the number and type of input data corresponding to the current task vary for each step;"

The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)).

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office Action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim 1 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jin et al. (CN115861893A, see attached translation), hereinafter Jin.
Regarding claim 1, Jin teaches an operation method of a multi-task learning model [Para 0006, This invention provides a multi-vehicle tracking method and system based on visible light images to address the technical problem of the inapplicability of existing multi-target tracking algorithms to multi-vehicle tracking problems], the operation method comprising:

obtaining a common feature vector of a previous step [Para 0034, all image frames input to the feature extraction network need to be preprocessed; Para 0039, Specifically, the pre-extracted features F1,t-1 from the previous frame image are extracted from the feature library G];

receiving input data (Para 0033-0035, all image frames input to the feature extraction network) corresponding to a current task executed in a current step from among a plurality of tasks (Para 0036, layers of feature extraction network) [Para 0033-0035, In step S1, the current input image frame is preprocessed. Specifically, based on TransTrack, for the current image frame It to be tracked, all image frames input to the feature extraction network need to be preprocessed. The preprocessing operations include: size resizing, center cropping, horizontal flipping, and normalization ... In step S2, the preprocessed image frame is input into the feature extraction network E-ResNet50 for preliminary feature extraction to obtain the pre-extracted features of the current frame; Para 0036, the feature extraction network E-ResNet50 includes a cascaded 7×7 convolutional layer, a max pooling layer, four residual blocks, and a receptive field enhancement block. The four residual blocks are each composed of a cascaded 1×1, a 3×3, and a 1×1 convolutional layer.
The receptive field enhancement block is composed of two 1×1, two 3×3, and one 7×7 convolutional layers connected in parallel, so as to map the preprocessed image frame into a high-dimensional feature space];

extracting a common feature vector (Para 0038, fused feature map) of the current step based on the common feature vector of the previous step and the input data corresponding to the current task [Para 0038, In step S3, the pre-extracted features of the current frame are fused with the pre-extracted features of the previous frame; Para 0039, Specifically, the pre-extracted features F1,t-1 from the previous frame image are extracted from the feature library G, and F1,t and F1,t-1 are fused to obtain the fused feature map F1,t,t-1];

extracting an output feature vector (Para 0047, vector features) corresponding to the current task based on the common feature vector of the current step [Para 0047, The extracted feature maps are then input into the channel splitting and adjustment module again to achieve a transformation from image space to vector space, resulting in vector features containing local information. These vector features are then fused with the global features to obtain the output of the context-aware coding layer that takes into account both global and local features]; and

outputting output data (Para 0054, output the detected target position) corresponding to the current task based on the output feature vector corresponding to the current task [Para 0054, The first decoding layer is mainly used to detect the aircraft target in the current frame and output the detected target position BDt and the corresponding feature FDL1-4,Dt in the current frame].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Jin in view of Han et al. (CN111782852A, see attached translation), hereinafter Han.

Regarding claim 2, Jin teaches the limitations of claim 1, including the common feature vector of the previous step (Jin, Para 0034 and 0039) and the common feature vector of the current step (Jin, Para 0038). Jin does not teach extracting a first feature vector based on a feature of the previous step; extracting a second feature vector based on the input data corresponding to the current task; and extracting a feature of the current step based on the first feature vector and the second feature vector.
Han teaches extracting a first feature vector (Para 0020, to obtain the semantic feature vector) based on a feature of the previous step (Para 0019, (2) Use the final CNN-RNN network model to extract the image titles of all images) [Para 0019, (2) Use the final CNN-RNN network model to extract the image titles of all images in the image library to be retrieved, i.e., the text features corresponding to the images, and store the extracted text features in the database; Para 0020, (3) Using the word vector model built into the gensim library, add the word vectors of each word in the text features, and take the average of the sum to obtain the semantic feature vector corresponding to each text feature and store it];

extracting a second feature vector (Para 0021, extract the text features of the query image) based on the input data corresponding to the current task (Para 0022, to obtain similar semantic feature vectors); and extracting a feature of the current step (Para 0022, to obtain similar semantic feature vectors) based on the first feature vector and the second feature vector [Para 0021, (4) Use the final CNN-RNN network model to extract the text features of the query image and extract its corresponding semantic feature vector; Para 0022, (5) Use the cosine similarity comparison method to compare the semantic feature vector of the query image with the semantic feature vectors of other images in the image library to obtain similar semantic feature vectors].

Han is analogous to the claimed invention, as both relate to deep-learning feature extraction. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jin's teachings to incorporate the teachings of Han and provide extracting features based on a first feature vector and a second feature vector in order to [Han, Para 0025] effectively extract high-level concepts while gaining the benefit of the various learning tasks.
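For illustration only, the two-branch extraction recited in claim 2 (a first feature vector derived from the previous step's common feature vector, a second derived from the current task's input data, and a combination of the two) can be sketched as follows. All names, weight matrices, dimensions, and the specific combination rule below are hypothetical and are not drawn from the claim or from Jin or Han.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear branches; the claim does not specify their form.
W1 = rng.standard_normal((8, 8))  # first branch: from the previous common feature vector
W2 = rng.standard_normal((8, 4))  # second branch: from the current task's input data

def extract_common(common_prev, task_input):
    f1 = np.tanh(W1 @ common_prev)  # "extracting a first feature vector ..."
    f2 = np.tanh(W2 @ task_input)   # "extracting a second feature vector ..."
    # "extracting the common feature vector of the current step based on
    #  the first feature vector and the second feature vector" -- shown here
    # as a simple fixed-weight average purely for illustration.
    return 0.5 * (f1 + f2)

common = extract_common(np.zeros(8), rng.standard_normal(4))
print(common.shape)
```

Each branch, as sketched, is a generic vector-to-vector mapping, consistent with the treatment of these limitations as mental steps in the § 101 analysis above.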
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Jin in view of Han, and in further view of HAJIMIRSADEGHI et al. (US 20200076841 A1), hereinafter Hajimirsadeghi.

Regarding claim 3, Jin-Han teach the limitations of claim 2, including the extracting of the first feature vector (claim 2: Han, Para 0019) and the extracting of the second feature vector (claim 2: Han, Para 0021). Jin-Han do not teach extracting a first feature vector including common feature information over time by inputting the common feature vector of the previous step to a first model, and inputting the input data corresponding to the current task to a second model and extracting the second feature vector including common feature information of the input data corresponding to the current task.

Hajimirsadeghi teaches extracting a first feature vector (Para 0207, generates dense feature vector) including common feature information over time (Para 0207, previous sparse feature vectors) by inputting the common feature vector of the previous step (Para 0207, from previous recurrent steps) to a first model (Para 0152, each recurrent step may contain an MLP), and inputting the input data corresponding to the current task (Para 0207, sparse feature vector 1123 as direct input to recurrent step) to a second model (Para 0152, each recurrent step may contain an MLP) and extracting the second feature vector (Para 0207, dense feature vector) including common feature information of the input data (Para 0207, sparse feature vector) corresponding to the current task (Para 0207, the current log message) [Para 0152, RNN 720 contains multiple recurrent steps, such as 721-723.
Each recurrent step may contain an MLP ... Each recurrent step 721-723 corresponds to a sequential time step, such as one for each packet of a network flow; Para 0207, In step 1206, the encoder RNN outputs a respective embedded feature vector that is based on features of the current log message and log messages that occurred earlier in the sequence of related log messages. For example, recurrent step 1133 generates dense feature vector 1143 based on sparse feature vector 1123 as direct input to recurrent step 1133 and also based on cross activation by internal state from previous recurrent steps 1131-1132 that is based on previous sparse feature vectors 1121-1122. Thus, feature embedding into dense feature vector 1143 is contextually based on multiple log messages of original log sequence 1110].

Hajimirsadeghi is analogous to the claimed invention, as both relate to deep-learning feature extraction. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jin's teachings to incorporate the teachings of Hajimirsadeghi and provide extracting feature vectors by inputting information into models in order to [Han, Para 0025] effectively extract high-level concepts while gaining the benefit of the various learning tasks.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Jin in view of Han, and in further view of Yan et al. (CN115267672A, see attached translation), hereinafter Yan.

Regarding claim 4, Jin-Han teach the limitations of claim 2, including the extracting of the common feature vector of the current step (Han, Para 0022) and the common feature vector (claim 1).
Jin-Han do not teach determining a weight between the first feature vector and the second feature vector; and extracting the common feature vector through an inner product calculation between the common feature vector of the previous step and the first feature vector and the second feature vector, based on the determined weight.

Yan teaches determining a weight between the first feature vector and the second feature vector (Para 0109, the weights of each feature in the CNN are updated); and extracting the common feature vector (Para 0111, deeply mine the comprehensive features ... to extract the required feature information) through an inner product calculation between the common feature vector of the previous step and the first feature vector (Para 0109, the dot product of the two is performed) and the second feature vector (Para 0109, the feature map is multiplied by the original CNN feature map), based on the determined weight (Para 0109, the weights of each feature in the CNN are updated) [Para 0109, First, the feature maps of each channel are separated. Since the input features of this invention are 10 channels, the vectors of each channel are resized and the dot product of the two is performed. The significance of this step is that the (i, j) coordinates in the subsequent attention mechanism mapping map are the influence of the i-th element and the j-th element in that channel, thus realizing the dependency relationship between any two elements in the entire feature map. Then, the attention mechanism mapping feature map is obtained by normalizing through softmax. Finally, the feature map is multiplied by the original CNN feature map, and the weights of each feature in the CNN are updated.
As learning deepens, the individual features of the original feature map receive the weights updated by the attention mechanism, which means they gain global dependencies at any position; Para 0111, The convolutional block focuses on expanding the channel dimension to deeply mine the comprehensive features, while compressing the feature values in the time dimension to extract the required feature information].

Yan is analogous to the claimed invention, as both relate to deep-learning feature extraction. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jin's teachings to incorporate the teachings of Yan and provide extracting the common feature vector through an inner product calculation between the common feature vector of the previous step and the first feature vector and the second feature vector, based on the determined weight, in order to [Yan, Para 0111] improve deep learning models by mining comprehensive features.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Jin in view of Li et al. (CN111176850A, see attached translation), hereinafter Li.

Regarding claim 5, Jin teaches the limitations of claim 1. Jin does not teach wherein the number and type of input data corresponding to the current task vary for each step.
Li teaches wherein the number and type of input data corresponding to the current task (Abstract, target operation) vary for each step (Abstract, when a target operation is detected) [Abstract, when a target operation is detected, obtaining a target value corresponding to the target operation; obtaining a target task from a multidimensional array based on the target value, and adding the target task to the data pool; wherein the multidimensional array includes task types, the number of tasks corresponding to different task types, and a range of task weight values corresponding to different task types; the target value is a random integer value within a preset range; the preset range is determined based on the range of task weight values. The technical solution of this invention predetermines each task and adds each task to a data pool, then retrieves tasks based on the established data pool].

Li is analogous to the claimed invention, as both relate to multi-task machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jin's teachings to incorporate the teachings of Li and provide wherein the number and type of input data corresponding to the current task vary for each step in order to [Li, Abstract] avoid problems of hot data and slow system response, and improve the user experience.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED RAYHAN AHMED, whose telephone number is (571) 270-0286. The examiner can normally be reached Mon-Fri ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SYED RAYHAN AHMED/
Examiner, Art Unit 2126

/DAVID YI/
Supervisory Patent Examiner, Art Unit 2126