Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-5, 8-12, and 15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yonetani, US 2021/0272011 A1.
Regarding Claim 1, Yonetani teaches:
A method comprising, by first computer equipment: obtaining an input data point comprising a set of values of all elements of an input feature vector, each being a value of a different element of the input feature vector, the input feature vector comprising a plurality of fields, each field comprising one or more of the elements of the feature vector (paragraph 5: “inputting an input data set into a first private artificial intelligence model”; And, paragraph 28: “The input data set 50 may include a plurality of input data entries”; And, “the input data set 50 may be a partially labeled data set”. The labeled plurality of data entries is representative of the values of the elements of the input feature vector. See also paragraph 26, where the client computing devices are considered the first computer equipment);
inputting the input data point to a first machine learning model on the first computer equipment to generate at least one associated output label based on the input data point (paragraph 5: “inputting an input data set into a first private artificial intelligence model”; And, “receiving a first result data set from the first private artificial intelligence model as a result of applying the first private artificial intelligence model to the input data set”. The first result data set is representative of the associated output label);
sending a partial data point to second computer equipment, the partial data point comprising the values of only part of the feature vector, said part comprising one or more of the fields of the feature vector but not one or more others of the fields (paragraph 28: “the input data set 50 may be a partially labeled data set in which a subset of the input data entries 52 have respective classification labels”; the partially labeled data set is representative of the partial data point. And, paragraph 36: “the server computing device 10 may be further configured to train an adaptive co-distillation model 60 with the input data set”. The server is representative of the second computer equipment. And paragraph 50: “in embodiments in which the adaptive co-distillation model 60 is a classification model, the input data set 50 may be a partially labeled data set”. This partially labeled data set is a subset of the input data, distinct from the complete input data set, as described in paragraph 98: “the input data set may be a partially labeled data set including a first subset of input data entries that have respective input classification labels and a second subset of input data entries that do not have respective input classification labels”);
and sending the associated label to the second computer equipment in association with the partial data point, thereby causing the second computer equipment to train a second machine learning model on the second computer equipment based on the partial data point and the associated label (paragraph 5: “In a first training phase, the method may further include training an adaptive co-distillation model with the input data set as an input and the first result data set”. The adaptive co-distillation model is representative of the second machine learning model).
Regarding Claim 2, Yonetani further teaches:
The method of claim 1, wherein one of the plurality of fields comprises an audio field, the values of which comprise audio data of a person's speech (paragraph 77: “The second private artificial intelligence model is another recurrent neural network configured to distinguish between the speech of multiple people whose speech is included in an audio input”),
and another of the plurality of fields comprises a video field, the values of which comprise video data of the person's lips or face while speaking said speech (paragraph 80: “The adaptive co-distillation model is a regression model that is configured to receive video footage as an input”. Video footage can be of any type, including footage of the person's lips or face while speaking said speech);
wherein the first model is arranged to perform speech-to-text conversion based on the input feature vector, and the second model is arranged to perform the speech-to-text conversion based on said part of the feature vector, the output label comprising the text; and wherein said part of the feature vector comprises the audio field but not the video field (paragraph 77: “the first private artificial intelligence model is a recurrent neural network configured to generate a text transcription of speech. The second private artificial intelligence model is another recurrent neural network configured to distinguish between the speech of multiple people whose speech is included in an audio input. Using the outputs produced by the two recurrent neural networks when given a shared set of audio inputs, the adaptive co-distillation model may be trained to generate text transcriptions of speech included in an audio input and to tag each utterance in the transcription with an indication of which person spoke it. This is achieved without sharing of the individual recurrent neural networks themselves or the data used to train each recurrent neural network”).
Regarding Claim 3, Yonetani further teaches:
The method of claim 1, wherein one of the plurality of fields comprises an image field, the values of which comprise image data, and another of the plurality of fields comprises an inertial sensor data field, the values of which comprise inertial sensor data from one or more sensors measuring motion of a camera while capturing the image data (paragraph 55: “an image of an egg being grasped by a robot, in the input data set”; And “a tagged collection of images of golf balls being grasped, or not grasped as the case may be, by a robot, the tag being a ground truth tag indicating whether the image shows the robot properly grasping the ball”. The robot can have a camera and sensors that measure their respective data; see paragraph 4);
wherein the first model is arranged to perform image recognition to detect an object based on the input feature vector, and the second model is arranged to perform the image recognition based on said part of the input feature vector, the output label comprising an indication of the object; and wherein said part of the feature vector comprises the image data field but not the inertial sensor data field (paragraph 78: “In another example use case scenario, the first private artificial intelligence model is a recurrent neural network configured to control the motion of a robotic arm to pass manufactured items from one area in a factory to another area. The second private artificial intelligence model is another recurrent neural network configured to the movement of an autonomous robot as the robot navigates a physical environment. The shared input data set given to the first private artificial intelligence model and the second private artificial intelligence model includes layout data indicating the sizes, shapes, and positions of objects in a factory environment. Using the respective outputs of the first private artificial intelligence model and the second private artificial intelligence model, the adaptive co-distillation model is trained to output combined movement paths by which the manufactured items are moved from one area of the factory environment to another area of the factory environment. In each combined movement path, a manufactured item is moved from an initial location to the autonomous robot by the robotic arm and is then moved to a final location by the autonomous robot. 
The adaptive co-distillation model is trained to generate the combined movement paths without the manufacturer of the robotic arm and the manufacturer of the autonomous robot having to give the user who trains the adaptive co-distillation model access to their private machine learning models”; And paragraph 79: “In other example use case scenario, an adaptive co-distillation model is trained for use in a medical setting. In this example, the first private artificial intelligence model is a support vector machine configured to identify which bones are present in an x-ray image. The second private artificial intelligence model is a convolutional neural network configured to determine whether a bone in an x-ray image is fractured. An input data set including a plurality of x-ray images is input into both the first private artificial intelligence model and the second private artificial intelligence model, and the outputs of the private models are used to train the adaptive co-distillation model. The trained adaptive co-distillation model is configured to receive x-ray images and output respective labels that indicate which bones, if any, that appear in the x-ray image are fractured”).
Regarding Claim 4, Yonetani further teaches:
The method of claim 1, comprising, by the first computer equipment in an initial training phase prior to the obtaining of said input data point: obtaining a plurality of initial data points, each comprising a respective set of values of some or all of the fields of said feature vector; sending each of said plurality of initial data points to the second computer equipment, and in response, receiving back associated labels generated by the second model based on the initial data points; and training the first model based on the initial data points and the associated labels received from the second computer equipment (paragraph 2: “In distributed machine learning, a machine learning algorithm is trained using data distributed between multiple computing devices. Each of those computing devices may store its own set of training data that are used to locally train a machine learning model. Those machine learning models may then be combined into a centralized model”. This limitation corresponds to the process of distributed learning described in paragraph 2; see also paragraph 63. Examiner’s note: See also Wang, US 2019/0318268 A1, for example paragraph 5).
Regarding Claim 5, Yonetani further teaches:
The method of claim 4, comprising, by the first computer equipment in a subsequent training phase following the sending of the label to the second computer equipment: sending a further data point to the second computer equipment, the further data point comprising values of some or all of the fields of the feature vector, and in response receiving back a further label generated by the second model based on the further data point; and updating the training of the first model based on the further label and further data point (paragraph 111: “the processor may be further configured to train a template machine learning model on a template data set. The processor may be further configured to transmit the template machine learning model to the first client computing device and the second client computing device. The first private artificial intelligence model may be a first copy of the template machine learning model that has been further trained on the first private data set. The second private artificial intelligence model may be a second copy of the template machine learning model that has been further trained on the second private data set”. That is, the first and second private artificial intelligence models, each equivalent to the first model, are further trained, i.e., updated. Examiner’s note: see also Tan, US 2018/0240011 A1, for example paragraph 46).
Claims 8-12 are similar to Claims 1-5 and are rejected under the same rationale as stated above for those claims.
Claim 15 is similar to Claim 1 (the last two “receiving” limitations in the claim also being taught by Yonetani, see for example paragraphs 5, 36-37) and is rejected under the same rationale as stated above for that claim.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892 for the relevant prior art where for example Tan, US 2018/0240011 A1, teaches distributed machine learning.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVE MISIR whose telephone number is (571)272-5243. The examiner can normally be reached M-R 8-5 pm, F some hours.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at 571-270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVE MISIR/Primary Examiner, Art Unit 2127