DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is in response to the applicant’s reply filed December 19, 2025. In the applicant’s reply, claims 1 and 19 were amended and claim 3 was cancelled. Claims 1-19 are pending in this application.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Examiner’s Response to Applicant’s Remarks
Applicant’s amendments filed on December 19, 2025 have been fully considered. The amendments overcome the following rejections set forth in the Office action mailed on September 19, 2025.
Applicant’s amendments overcome the objection to the title of the specification, and the objection is hereby withdrawn.
Applicant’s amendments overcome the rejection of claims 1-19 under 35 U.S.C. 102(a)(1) as being anticipated by Yamazaki et al. (US PGPub 2019/0091869 A1, published March 28, 2019, hereinafter referred to as “Yamazaki”), and the rejection is hereby withdrawn.
Applicant’s amendments overcome the rejection of claims 1 and 19 under 35 U.S.C. 103 as being unpatentable over Yamazaki et al. (US PGPub 2019/0091869 A1, published March 28, 2019, hereinafter referred to as “Yamazaki”) in view of Gilboa-Soloman et al. (US PGPub 2022/0012872 A1, hereinafter referred to as “Gilboa-Soloman”), and the rejection is hereby withdrawn.
Applicant's arguments with respect to claims 1-19 have been considered but are moot in view of the new grounds of rejection, as presented below, necessitated by applicant’s amendments.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on December 25, 2020. It is noted, however, that applicant has not filed a certified copy of the JP2020-216583 application as required by 37 CFR 1.55.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-19 are rejected under 35 U.S.C. § 103 as being unpatentable over Yamazaki et al. (US PGPub 2019/0091869 A1, published March 28, 2019, hereinafter referred to as “Yamazaki”) in view of Wang et al. (US PGPub 2019/0204102 A1, hereinafter referred to as “Wang”).
Consider Claims 1 and 19.
Yamazaki teaches:
1. (Original) A machine learning device comprising:/ 19. (Original) A machine learning method executed by a computer, the machine learning method comprising: (Yamazaki: abstract, To select a picking position of a workpiece in a simpler method. A robot system includes a three-dimensional measuring device for generating a range image of a plurality of workpieces, a robot having a hand for picking up at least one of the plurality of workpieces, a display part for displaying the range image generated by the three-dimensional measuring device, and a reception part for receiving a teaching of a picking position for picking-up by the hand on the displayed range image. The robot picks up at least one of the plurality of workpieces by the hand on the basis of the taught picking position. [0013] (4) Another robot system (for example, a robot system 1 a described below) according to the present invention includes a three-dimensional measuring device (for example, a three-dimensional measuring device 40 described below) for generating a range image of a plurality of workpieces (for example, workpieces 50 described below), a robot (for example, a robot 30 described below) having a hand for picking up at least one of the plurality of workpieces, a display part (for example, a display part 12 described below) for displaying the range image generated by the three-dimensional measuring device, and a picking position taught on the range image, a reception part (for example, an operation reception part 13 described below) for receiving a teaching of the picking position for picking-up by the hand on the displayed range image, and a learning part (for example, a learning part 14 described below) for performing machine learning, by using three-dimensional point group information of the taught picking position and a periphery of the taught picking position as input data, and using as a label at least one of an evaluation value based on the teaching to the three-dimensional point group 
information used as the input data and an evaluation value according to success or failure in the picking-up, and thereby for building a learning model for outputting an evaluation value of the three-dimensional point group information input as the input data. The robot picks up at least one of the plurality of workpieces by the hand on the basis of the taught picking position.)
1. an acquisition unit configured to acquire training data and inference data for use for machine learning; / 19. an acquisition step of acquiring training data and inference data for use for machine learning; (Yamazaki: [0039] The image processing apparatus 10 a acquires and stores the range image data of the periphery of the picking position (hereinafter referred to as “teaching position peripheral image data”) taught by the user. It is noted that the suitable position as a candidate picking position herein is, for example, a position where the height of its three-dimensional point is high. [0040] In the case of selecting the picking position of the workpiece 50 thereafter, the image processing apparatus 10 a searches the range image for the image region matched to the stored teaching position peripheral image data. [0041] The image processing apparatus 10 a further performs machine learning on the basis of the teaching result, and builds a learning model for selecting the picking position of the workpiece 50. Then, the image processing apparatus 10 a selects a new picking position from the range image on the basis of the built learning model. The robot system 1 a with such a configuration is capable of selecting the picking position with higher accuracy. The outline of the robot system 1 a has been described so far. The following descriptions are about respective devices included in the robot system 1 a.)
1. a training unit configured to perform machine learning based on the training data and a plurality of sets of training parameters, and generate a plurality of trained models; / 19. a training step of performing machine learning based on the training data and a plurality of sets of training parameters, and generating a plurality of trained models; (Yamazaki: [0054] The learning part 14 is a unit for performing processing relating to machine learning. The learning part 14 includes a learning processing part 141, a learned model storage part 142, and an estimation processing part 143. The learning processing part 141, which is a unit for performing machine learning, performs deep learning using, for example, a convolution neural network. The learned model storage part 142 is a unit for storing parameters of a learning model in progress of learning by the learning processing part 141 and parameters of learned models in machine learning. The estimation processing part 143 performs estimation using the learned models stored in the learned model storage part 142, for the sake of the selection of the picking position by the selection processing part 11. The machine learning to be performed by these respective units will be described in detail later in the item of <Machine learning>. [0055] The functional blocks included in the image processing apparatus 10 a have been described so far. In order to realize these functional blocks, the image processing apparatus 10 a includes an arithmetic processing unit such as a CPU (Central Processing Unit). The image processing apparatus 10 a further includes an auxiliary storage device such as a HDD (Hard Disk Drive) for storing various control programs such as application software and OS (Operating System), main storage device such as a RAM (Random Access Memory) for temporarily storing data required when the arithmetic processing unit executes a program.)
1. a model evaluation unit configured to evaluate whether trained results of the plurality of trained models are good or bad and display evaluated results; / 19. a model evaluation step of evaluating whether trained results of the plurality of trained models are good or bad and displaying evaluated results; (Yamazaki: [0068] FIG. 6 shows one example of the point group information for matching as point group information for matching 80. Although a teaching point is drawn merely for explanation in the example of FIG. 6, the teaching position peripheral image data included in the point group information for matching 80 includes no teaching position drawn. In the following description, the indication of “80” which is the reference numeral given to the point group information for matching will be omitted. [0069] The range size of the three-dimensional point group to be acquired as point group information for matching is previously set on the basis of the size of the workpieces 50 or the like. It is noted that the point group information for matching may be used in such a manner that the three-dimensional point group information having a somewhat large range is stored as point group information for matching, and thereafter the setting thereof is adjusted so that the point group information for matching is trimmed and acquired in a desired size when the stored point group information for matching is used in the matching described below. [0070] It is noted that the annotation processing part 112 may add additional information to the point group information for matching. For example, information indicating the features of the point group information for matching may be added. 
The information indicating features herein is, for example, information such as of an average of heights of the three-dimensional points of the plurality of pixels included in the point group information for matching, or the height of the three-dimensional points of the pixels corresponding to the teaching position.)
1. a model selection unit capable of accepting selection of a trained model; / 19. a model selection step of enabling acceptance of selection of a trained model; (Yamazaki: [0074] The following description is about the matching using the point group information for matching performed by the matching part 113 in order that the workpiece 50 is picked up. In order that the workpiece 50 is picked up, the matching part 113 acquires the range image of the workpieces 50 in bulk stacked in the container 60, from the three-dimensional measuring device 40. Then, the matching part 113 searches the acquired range image on the basis of the teaching position peripheral image data included in each piece of point group information for matching stored in the selection data storage part 111, by using a matching method of three-dimensional point groups, for example, ICP (Iterative Closest Point) matching. Then, the matching part 113 selects, for example, the central position of the image region having a high evaluation of matching in the range image, as the picking position of the workpiece 50 to be picked up. It is noted that the matching part 113 may select the plurality of image regions whose evaluations of matching are higher than a threshold value, and may use the image region having the highest height from the ground among the plurality of image regions, as new teaching position peripheral image data. [0075] The matching part 113 transmits the selected picking position of the workpiece 50 to the robot controller 20. Then, the robot controller 20 controls the robot 30 on the basis of the received picking position of the workpiece 50 so that picking up of the workpiece 50 is tried to be performed.)
1. an inference calculation unit configured to perform an inference calculation process based on at least a part of the plurality of trained models, and the inference data, generate inference result candidates; / 19. an inference calculation step of performing an inference calculation process based on at least a part of the plurality of trained models, and the inference data, generating inference result candidates; (Examiner Note: matching part 113 performs matching based on each piece of point group information for matching and is analogous in scope to the inference calculation unit; Yamazaki: [0076] Herein, although the point group information for matching is generated on the basis of the teaching by the user as described above, all the pieces of point group information for matching are not necessarily appropriate. In an example, in some cases, the picking-up may succeed in the case where a region having a high evaluation of matching with a certain piece of point group information for matching is set as a picking position, but may fail in the case where a region having a high evaluation of matching with another piece of point group information for matching is set as a picking position. As described above, in some cases, the success or failure may depend on a piece of the point group information for matching. Thus, the matching part 113 may evaluate each piece of point group information for matching, thereby imparting an evaluation value to each piece of point group information for matching. Then, the matching part 113 preferably uses the point group information for matching having a high evaluation value. The matching part 113 stores the point group information for matching having the imparted evaluation values in the selection data storage part 111, in order that the point group information for matching having the imparted evaluation values is used as teacher data in the machine learning described later. 
It is noted that the point group information for matching having low evaluation values is also necessary as teacher data (failure data) for the machine learning. Therefore, the matching part 113 stores not only the point group information for matching having high evaluation values but also the point group information for matching having low evaluation values, as teacher data in the selection data storage part 111. [0077] The matching part 113 is capable of imparting an evaluation value depending on the success or failure in picking up the workpiece 50. In an example, in the case where a region having a high evaluation of matching with a certain piece of point group information for matching is set as a picking position, and further where the picking-up of the workpiece 50 succeeds, the matching part 113 imparts a higher evaluation value than the case where the picking-up fails. In an example, the matching part 113 imparts a first predetermined value or higher (for example, 60 points or higher) in the case where the picking-up of the workpiece 50 succeeds, and imparts a second predetermined value or lower (for example, 50 points or lower) in the case where the picking-up of the workpiece 50 fails. In an example, in the case where the picking-up of the workpiece 50 succeeds, the matching part 113 may impart an evaluation value further differently depending on the time taken for the picking-up. In an example, as the time taken for picking up the workpiece 50 is shorter, the matching part 113 may impart a higher evaluation value. In another example, in the case where the picking-up of the workpiece 50 fails, the matching part 113 may impart an evaluation value differently depending on the degree of failure. In an example, in the case where the workpiece 50 has been gripped but has fallen in the middle of the picking-up, the matching part 113 may impart a higher evaluation value than the case where the workpiece 50 has not been gripped.)
1. and an inference decision unit configured to output all, or a part, or a combination of the inference result candidates. / 19. and an inference decision step of outputting all, or a part, or a combination of the inference result candidates. (Yamazaki: [0088]-[0089] The output layer outputs an evaluation value with respect to the picking-up based on the point group information for matching used as input data, on the basis of the output from the fully connected layer. Then, the error between the output from the output layer and the label is calculated. The label herein is the evaluation value imparted to the point group information for matching used as input data, as described above. [0090] At the start of the learning, since each parameter included in the convolution neural network is not appropriately weighted, the error may likely have a large value. Therefore, the learning processing part 141 corrects the weighting value so as to reduce the calculated error. Specifically, the processing called forward propagation or back propagation is repeated in order to reduce the error, thereby changing a weighting value of each perceptron included in the convolution neural network. [0091] In such a way, the learning processing part 141 learns the features of the teacher data, and inductively acquires the learning model for outputting the evaluation value from the point group information for matching used as input data. The learning processing part 141 stores the built learning model as a learned model in the learned model storage part 142.)
Yamazaki does not teach:
1/19. a training unit/step configured to perform machine learning for a plurality of times by setting, for each of a plurality of the training data, a plurality of sets of training parameters for a plurality of times, and generate a plurality of trained models;
1/19. a model selection unit/step configured to select a trained model based on the evaluated results by the model evaluation unit;
Wang teaches:
1. (Original) A machine learning device comprising:/ 19. (Original) A machine learning method executed by a computer, the machine learning method comprising: (Wang: abstract, A method and an apparatus for estimating travel time are provided. [0010] In an embodiment, a plurality of verification samples is also built based on the historical travel data. Each of the plurality of verification samples includes a second one of the plurality of historical travel routes, the road traffic information for the second one of the plurality of historical travel routes, and the actual travel time of the second one of the plurality of historical travel routes. The plurality of verification samples is different from the plurality of training samples. In the disclosed method, for each of the plurality of verification samples, the respective road traffic information for the respective verification sample is fed as an input to the travel time calculation model, and the travel time calculation model is applied to calculate a respective estimated travel time of the respective verification sample. Further, a quality evaluation parameter of the travel time calculation model is calculated based on the respective actual travel time and the respective estimated travel time of each of the plurality of the verification samples. The quality evaluation parameter indicates a prediction accuracy of the travel time calculation model. Next, whether the quality evaluation parameter satisfies a preset condition is detected. The plurality of training samples are adjusted when the quality evaluation parameter does not satisfy the preset condition, and the adjusted plurality of training samples are trained by using the machine learning algorithm to obtain another travel time calculation model. [0034], Figure 1, [0051], Figure 2A)
1. an acquisition unit configured to acquire training data and inference data for use for machine learning; / 19. an acquisition step of acquiring training data and inference data for use for machine learning; (Wang: [0034] Referring to FIG. 1, FIG. 1 shows a flowchart of a travel time prediction method according to an embodiment of this application. The method may include the following steps: [0035] Step 101: Obtain a travel time prediction request, the travel time prediction request being used for requesting for predicting an estimated travel time of a target travel route from a starting point to an end point. [0036] A server obtains the travel time prediction request. For example, the server receives a travel time prediction request sent by a terminal. The target travel route may be a set travel route customized by a user, or may be a travel route planned and generated by the terminal according to a route planning condition (such as a starting point and an end point) set by the user. [0037] Step 102: Obtain a whole route feature corresponding to the target travel route, the whole route feature including a feature used for indicating a road traffic status at a current moment (or current time).)
1. a training unit configured to perform machine learning for a plurality of times by setting, for each of a plurality of the training data, a plurality of sets of training parameters for a plurality of times, and generate a plurality of trained models; / 19. a training step configured to perform machine learning for a plurality of times by setting, for each of a plurality of the training data, a plurality of sets of training parameters for a plurality of times, and generating a plurality of trained models; (Wang: [0051] Referring to FIG. 2A, FIG. 2A shows a flowchart of a travel time prediction method according to another embodiment of this application. The method may include the following steps: [0052]-[0055], [0056] Step 203: Use the whole route feature as input to a travel time calculation model, and use the travel time calculation model to calculate the estimated travel time of the target travel route, where the travel time calculation model is obtained through training according to historical travel data. [0057] The travel time calculation model is a mathematical model used for predicting a travel time of a travel route. In addition, input of the travel time calculation model is a whole route feature of the travel route, and output is an estimated travel time of the travel route. In this embodiment, during the process of training the travel time calculation model based on massive historical travel data, each feature item is fully considered, so that the travel time calculation model obtained through training can calculate the estimated travel time more accurately. Moreover, the estimated travel time is calculated by modeling, so that a calculation process is more direct and simple. [0058] Optionally, as shown in FIG. 2B, the following step is used for obtaining a travel time calculation model: [0059] Step 21: Build a training sample set according to historical travel data. 
[0060] The training sample set includes multiple training samples, and each training sample includes: a whole route feature corresponding to a historical travel route and an actual travel time of the historical travel route. [0061] In an example, referring to FIG. 2C, FIG. 2C shows a schematic diagram of a building process of a training sample set.)
1. a model evaluation unit configured to evaluate whether trained results of the plurality of trained models are good or bad and display evaluated results; / 19. a model evaluation step of evaluating whether trained results of the plurality of trained models are good or bad and displaying evaluated results; (Wang: [0059] Step 21: Build a training sample set according to historical travel data. [0060] The training sample set includes multiple training samples, and each training sample includes: a whole route feature corresponding to a historical travel route and an actual travel time of the historical travel route. [0061] In an example, referring to FIG. 2C, FIG. 2C shows a schematic diagram of a building process of a training sample set. A server prepares massive historical travel data in advance; route extraction is performed on the massive historical travel data, to select some historical travel routes satisfying a target condition; for each selected historical travel route, a training sample is built according to a whole route feature corresponding to the historical travel route and an actual travel time of the historical travel route. To ensure a quality of the training sample, and to improve a quality of the travel time calculation model obtained through training, the target condition satisfied by the selected historical travel route includes but is not limited to at least one of the following: a total length of a travel route is relatively long, a feedback interval of a Global Positioning System (GPS) is small and stable and has no obvious jump and shift, or the travel route is mainly a high-grade road and has a low space-time coincidence degree. Optionally, a quantity of training samples in the training sample set is not less than ten percent of a total quantity of historical travel routes in the historical travel data. [0062] Step 22: Train a training sample by using a machine learning algorithm, to obtain a travel time calculation model.)
1. a model selection unit configured to select a trained model based on the evaluated results by the model evaluation unit; / 19. a model selection step of selecting a trained model based on the evaluated results by the model evaluation unit; (Wang: [0066] Optionally, to ensure prediction precision of the travel time calculation model, the following steps are used to verify the travel time calculation model: [0067] Step 23: Build a verification sample set according to the historical travel data. [0068] The verification sample set includes multiple verification samples, and the verification samples are used for verifying a model. The verification sample may also be referred to as a test sample. Each verification sample includes: a whole route feature corresponding to a historical travel route and an actual travel time of the historical travel route. A feature item extracted when a verification sample is built is the same as a feature item extracted when a training sample is built. [0069] For a manner of building the verification sample set, optionally, in the foregoing historical travel data prepared in advance, the server performs route extraction on historical travel data excluding the training samples, to obtain the verification sample set. Optionally, a quantity of verification samples is not less than ten percent of a total quantity of historical travel routes in the historical travel data. [0070] Optionally, a training sample selected by the server satisfies a target condition included in step 201, so that low-quality training samples are prevented from causing a counter effect to a subsequent adjustment of the travel time calculation model.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Yamazaki with the teachings of Wang, as both references are directed toward improvements in machine learning models for feature extraction and data analysis. Yamazaki is directed toward feature analysis of image data for workpiece selection, while Wang is directed toward learning algorithms for time-related prediction using parameters such as historical data and road traffic information. The determination of obviousness is predicated upon the following findings: one skilled in the art would have been motivated to modify Yamazaki to incorporate learning algorithms that take into account additional parameters, such as time, in order to refine the accuracy of the detection and selection algorithm. Furthermore, the prior art collectively includes each claimed element (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and programming techniques, without changing a fundamental operating principle of Yamazaki, with the teaching of Wang continuing to perform the same function as originally taught prior to the combination, thereby producing the repeatable and predictable result of incorporating time-related metrics and historical data into the machine learning model to improve workpiece detection and selection. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claims in question.
Consider Claim 2.
The combination of Yamazaki and Wang teaches:
2. (Original) The machine learning device according to claim 1, wherein the model selection unit accepts a trained model selected by a user based on the evaluated results displayed by the model evaluation unit. (Yamazaki: [0101] In step S11, the annotation processing part 112 acquires the range image generated by measuring the workpieces 50 in bulk, from the three-dimensional measuring device 40. In step S12, the annotation processing part 112 makes the display part 12 display the range image. In step S13, the annotation processing part 112 draws the teaching position on the range image, on the basis of the teaching of the picking position of the workpiece 50 received from the user by the operation reception part 13. [0102] In step S14, the annotation processing part 112 sets the size of the point group information for matching. The size is set on the basis of a previously-given set value or user's operation. In step S15, the annotation processing part 112 generates the point group information for matching, on the basis of the setting performed in step S14. The annotation processing part 112 stores the generated point group information for matching in the selection data storage part 111. In step S16, the matching part 113 imparts an evaluation value to the point group information for matching, by performing the matching and the picking-up of the workpiece 50 by using the point group information for matching stored in the selection data storage part 111. The matching part 113 stores the point group information for matching having the imparted evaluation value in the selection data storage part 111.)
Consider Claim 3.
The combination of Yamazaki and Wang teaches:
3. (Original) The machine learning device according to claim 1, wherein the model selection unit selects a trained model based on the evaluated results by the model evaluation unit. (Yamazaki: [0103] In step S17, the matching part 113 determines whether or not the point group information for matching is required additionally. In the case where a predetermined number or more of pieces of point group information for matching each having a predetermined evaluation value or higher are stored, the determination is made as No in step S17, and the processing proceeds to step S18. While in the case where a predetermined number or more of pieces of point group information for matching each having a predetermined evaluation value or higher are not stored, the determination is made as Yes in step S17, the processing returns to step S11 so that the processing is repeated again. [0104] In step S18, the learning processing part 141 performs learning, by using as input data the point group information for matching stored by the learning part 14, and using as a label the evaluation value imparted to the point group information for matching. This builds a learned model and stores it in the learned model storage part 142.)
Consider Claim 4.
The combination of Yamazaki and Wang teaches:
4. (Currently Amended) The machine learning device according to claim 1, further comprising a parameter extraction unit, wherein the parameter extraction unit extracts important training parameters from among the plurality of training parameters, and the training unit performs machine learning based on the extracted training parameters, and generates the plurality of trained models. (Yamazaki: [0105] The operation in the clipping processing is described below with reference to FIG. 9B. In step S19, the clipping part 114 acquires the range image generated by measuring the workpieces 50 in bulk, from the three-dimensional measuring device 40. [0106] In step S20, the clipping part 114 clips out the range image in the same size as input in the learned model (that is, the same size as the point group information for matching) from the acquired entire range image, as a candidate picking position. Then, the clipping part 114 acquires the three-dimensional point group information of the candidate picking position. The clipping part 114 stores the acquired three-dimensional point group information of the candidate picking position in the selection data storage part 111. The estimation processing part 143 inputs in the learned model the respective pieces of three-dimensional point group information of all the candidate picking positions stored in the selection data storage part 111, and acquires as outputs the evaluation values with respect to the respective candidate picking positions. The estimation processing part 143 notifies the clipping part 114 of the outputs.)
Consider Claim 5.
The combination of Yamazaki and Wang teaches:
5. (Currently Amended) The machine learning device according to claim 1, wherein the model evaluation unit evaluates whether the trained models are good or bad, based on the inference result candidates generated by the inference calculation unit. (Yamazaki: [0107] In step S21, the clipping part 114 selects as a picking position the position stored in association with the clipped image having a high evaluation value output by the learned model. The clipping part 114 transmits the selected picking position to the robot controller 20. Then, the robot controller 20 controls the robot 30 on the basis of the received picking position so that picking up of the workpiece 50 is tried to be performed. As described above, as a result of trying to pick up the workpiece 50, the picking-up may succeed or may fail. Then, the clipping part 114 imparts an evaluation value to the candidate picking position depending on success or failure in picking up the workpiece 50. [0108] In step S22, the learning processing part 141 determines whether to update the learned model, by performing learning by using as teacher data the candidate picking position to which the evaluation value is imparted in step S21. In the case where the mini-batch learning is performed, and further where a predetermined number of pieces of teacher data have been recorded or where a predetermined period of time has elapsed since the previous learning, the determination is made as Yes in step S22, and the processing proceeds to step S23. While in the case where only a predetermined number or less of pieces of teacher data have been recorded or where a predetermined period of time has not elapsed since the previous learning, the determination is made as No in step S22, and the processing proceeds to step S24. It is noted that in the case where the online learning is performed, the determination is made as Yes in step S22, and the processing proceeds to step S23.)
Consider Claim 6.
The combination of Yamazaki and Wang teaches:
6. (Original) The machine learning device according to claim 5, wherein the model selection unit selects a trained model, based on the evaluated results by the model evaluation unit, using the inference result candidates generated by the inference calculation unit. (Yamazaki: [0109] In step S23, the learning processing part 141 performs the learning described above, by using the three-dimensional point group data of the candidate picking position as input data, and using as a label the evaluation value imparted to the candidate picking position used as input data. This updates the learned model stored in the learned model storage part 142. [0110] In step S24, the clipping part 114 determines whether to continue the picking-up. In the case where there are some candidate picking positions each having the evaluation value and further where the picking-up is not performed with respect to the candidate picking positions, some workpieces 50 not having been picked up are considered to be left, and thus the determination is made as Yes in step S24, and the processing proceeds to step S19. While in the case where the picking-up has been performed with respect to all the candidate picking positions each having a high evaluation value, all the workpieces 50 are considered to have been picked up, and thus the determination is made as No in step S24, and the processing is terminated.)
Consider Claim 7.
The combination of Yamazaki and Wang teaches:
7. (Currently Amended) The machine learning device according to claim 1, wherein the inference calculation unit performs the inference calculation process based on trained models evaluated as good by the model evaluation unit, and generates the inference result candidates. (Examiner Note: matching part 113 performs matching based on each piece of point group information for matching and is analogous in scope to the inference calculation unit; Yamazaki: [0076] Herein, although the point group information for matching is generated on the basis of the teaching by the user as described above, all the pieces of point group information for matching are not necessarily appropriate. In an example, in some cases, the picking-up may succeed in the case where a region having a high evaluation of matching with a certain piece of point group information for matching is set as a picking position, but may fail in the case where a region having a high evaluation of matching with another piece of point group information for matching is set as a picking position. As described above, in some cases, the success or failure may depend on a piece of the point group information for matching. Thus, the matching part 113 may evaluate each piece of point group information for matching, thereby imparting an evaluation value to each piece of point group information for matching. Then, the matching part 113 preferably uses the point group information for matching having a high evaluation value. The matching part 113 stores the point group information for matching having the imparted evaluation values in the selection data storage part 111, in order that the point group information for matching having the imparted evaluation values is used as teacher data in the machine learning described later. It is noted that the point group information for matching having low evaluation values is also necessary as teacher data (failure data) for the machine learning. 
Therefore, the matching part 113 stores not only the point group information for matching having high evaluation values but also the point group information for matching having low evaluation values, as teacher data in the selection data storage part 111. [0077] The matching part 113 is capable of imparting an evaluation value depending on the success or failure in picking up the workpiece 50. In an example, in the case where a region having a high evaluation of matching with a certain piece of point group information for matching is set as a picking position, and further where the picking-up of the workpiece 50 succeeds, the matching part 113 imparts a higher evaluation value than the case where the picking-up fails. In an example, the matching part 113 imparts a first predetermined value or higher (for example, 60 points or higher) in the case where the picking-up of the workpiece 50 succeeds, and imparts a second predetermined value or lower (for example, 50 points or lower) in the case where the picking-up of the workpiece 50 fails. In an example, in the case where the picking-up of the workpiece 50 succeeds, the matching part 113 may impart an evaluation value further differently depending on the time taken for the picking-up. In an example, as the time taken for picking up the workpiece 50 is shorter, the matching part 113 may impart a higher evaluation value. In another example, in the case where the picking-up of the workpiece 50 fails, the matching part 113 may impart an evaluation value differently depending on the degree of failure. In an example, in the case where the workpiece 50 has been gripped but has fallen in the middle of the picking-up, the matching part 113 may impart a higher evaluation value than the case where the workpiece 50 has not been gripped. )
Consider Claim 8.
The combination of Yamazaki and Wang teaches:
8. (Currently Amended) The machine learning device according to claim 1, wherein the model selection unit selects the trained model based on the inference result candidates generated by the inference calculation unit. (Yamazaki: [0088]-[0089] The output layer outputs an evaluation value with respect to the picking-up based on the point group information for matching used as input data, on the basis of the output from the fully connected layer. Then, the error between the output from the output layer and the label is calculated. The label herein is the evaluation value imparted to the point group information for matching used as input data, as described above. [0090] At the start of the learning, since each parameter included in the convolution neural network is not appropriately weighted, the error may likely have a large value. Therefore, the learning processing part 141 corrects the weighting value so as to reduce the calculated error. Specifically, the processing called forward propagation or back propagation is repeated in order to reduce the error, thereby changing a weighting value of each perceptron included in the convolution neural network. [0091] In such a way, the learning processing part 141 learns the features of the teacher data, and inductively acquires the learning model for outputting the evaluation value from the point group information for matching used as input data. The learning processing part 141 stores the built learning model as a learned model in the learned model storage part 142.)
Consider Claim 9.
The combination of Yamazaki and Wang teaches:
9. (Currently Amended) The machine learning device according to claim 1, wherein when there is no output from the inference decision unit, the model selection unit newly selects one or more trained models from among the plurality of trained models, the inference calculation unit performs the inference calculation process based on the one or more trained models newly selected, and generates one or more new inference result candidates, and the inference decision unit outputs all, or a part, or a combination of the new inference result candidates. (Yamazaki: Figure 6, [0068]-[0069] The range size of the three-dimensional point group to be acquired as point group information for matching is previously set on the basis of the size of the workpieces 50 or the like. It is noted that the point group information for matching may be used in such a manner that the three-dimensional point group information having a somewhat large range is stored as point group information for matching, and thereafter the setting thereof is adjusted so that the point group information for matching is trimmed and acquired in a desired size when the stored point group information for matching is used in the matching described below. [0070] It is noted that the annotation processing part 112 may add additional information to the point group information for matching. For example, information indicating the features of the point group information for matching may be added. The information indicating features herein is, for example, information such as of an average of heights of the three-dimensional points of the plurality of pixels included in the point group information for matching, or the height of the three-dimensional points of the pixels corresponding to the teaching position. [0074] The following description is about the matching using the point group information for matching performed by the matching part 113 in order that the workpiece 50 is picked up. 
In order that the workpiece 50 is picked up, the matching part 113 acquires the range image of the workpieces 50 in bulk stacked in the container 60, from the three-dimensional measuring device 40. Then, the matching part 113 searches the acquired range image on the basis of the teaching position peripheral image data included in each piece of point group information for matching stored in the selection data storage part 111, by using a matching method of three-dimensional point groups, for example, ICP (Iterative Closest Point) matching. Then, the matching part 113 selects, for example, the central position of the image region having a high evaluation of matching in the range image, as the picking position of the workpiece 50 to be picked up. It is noted that the matching part 113 may select the plurality of image regions whose evaluations of matching are higher than a threshold value, and may use the image region having the highest height from the ground among the plurality of image regions, as new teaching position peripheral image data.)
Consider Claim 10.
The combination of Yamazaki and Wang teaches:
10. (Currently Amended) The machine learning device according to claim 1, wherein the training unit performs machine learning based on a plurality of sets of the training data. (Yamazaki: [0088]-[0089] The output layer outputs an evaluation value with respect to the picking-up based on the point group information for matching used as input data, on the basis of the output from the fully connected layer. Then, the error between the output from the output layer and the label is calculated. The label herein is the evaluation value imparted to the point group information for matching used as input data, as described above. [0090] At the start of the learning, since each parameter included in the convolution neural network is not appropriately weighted, the error may likely have a large value. Therefore, the learning processing part 141 corrects the weighting value so as to reduce the calculated error. Specifically, the processing called forward propagation or back propagation is repeated in order to reduce the error, thereby changing a weighting value of each perceptron included in the convolution neural network. [0091] In such a way, the learning processing part 141 learns the features of the teacher data, and inductively acquires the learning model for outputting the evaluation value from the point group information for matching used as input data. The learning processing part 141 stores the built learning model as a learned model in the learned model storage part 142.)
Consider Claim 11.
The combination of Yamazaki and Wang teaches:
11. (Currently Amended) The machine learning device according to claim 1, wherein the acquisition unit acquires, image data of an area where a plurality of workpieces are present, as the training data and the inference data, and the training data includes teaching data of at least one characteristic of the workpieces appeared on the image data. (Yamazaki: [0068] FIG. 6 shows one example of the point group information for matching as point group information for matching 80. Although a teaching point is drawn merely for explanation in the example of FIG. 6, the teaching position peripheral image data included in the point group information for matching 80 includes no teaching position drawn. In the following description, the indication of “80” which is the reference numeral given to the point group information for matching will be omitted. [0069] The range size of the three-dimensional point group to be acquired as point group information for matching is previously set on the basis of the size of the workpieces 50 or the like. It is noted that the point group information for matching may be used in such a manner that the three-dimensional point group information having a somewhat large range is stored as point group information for matching, and thereafter the setting thereof is adjusted so that the point group information for matching is trimmed and acquired in a desired size when the stored point group information for matching is used in the matching described below. [0070] It is noted that the annotation processing part 112 may add additional information to the point group information for matching. For example, information indicating the features of the point group information for matching may be added. 
The information indicating features herein is, for example, information such as of an average of heights of the three-dimensional points of the plurality of pixels included in the point group information for matching, or the height of the three-dimensional points of the pixels corresponding to the teaching position.)
Consider Claim 12.
The combination of Yamazaki and Wang teaches:
12. (Currently Amended) The machine learning device according to claim 1, wherein the acquisition unit acquires, three-dimensional measurement data of an area where a plurality of workpieces are present, as the training data and the inference data; and the training data includes teaching data of at least one characteristic of the workpieces appeared in the three-dimensional measurement data. (Yamazaki: [0071] In the first embodiment, since the plurality of workpieces 50 are stacked in bulk, the plurality of teaching positions are taught by the user as described above by use of one piece of range image obtained by measurement by the three-dimensional measuring device 40, thereby enabling to teach the picking positions with respect to the possible respective postures of the workpieces 50. [0073] On the other hand, in the first embodiment, the range image based on the three-dimensional points acquired under the actual optical conditions is acquired as input, and thus the three-dimensional point group information of the periphery of the drawing position can be stored, thereby enabling to prevent such a trouble that, as in the case of the teaching with a CAD model, the teaching position of the workpiece 50 corresponding to the teaching position on the CAD model cannot be acquired due to the optical conditions and the like. [0074] The following description is about the matching using the point group information for matching performed by the matching part 113 in order that the workpiece 50 is picked up. In order that the workpiece 50 is picked up, the matching part 113 acquires the range image of the workpieces 50 in bulk stacked in the container 60, from the three-dimensional measuring device 40. 
Then, the matching part 113 searches the acquired range image on the basis of the teaching position peripheral image data included in each piece of point group information for matching stored in the selection data storage part 111, by using a matching method of three-dimensional point groups, for example, ICP (Iterative Closest Point) matching. Then, the matching part 113 selects, for example, the central position of the image region having a high evaluation of matching in the range image, as the picking position of the workpiece 50 to be picked up. It is noted that the matching part 113 may select the plurality of image regions whose evaluations of matching are higher than a threshold value, and may use the image region having the highest height from the ground among the plurality of image regions, as new teaching position peripheral image data.)
Consider Claim 13.
The combination of Yamazaki and Wang teaches:
13. (Currently Amended) The machine learning device according to claim 11, wherein the training unit performs machine learning based on the training data, and the inference calculation unit generates inference result candidates including information about the at least one characteristic of the workpieces. (Yamazaki: [0071] In the first embodiment, since the plurality of workpieces 50 are stacked in bulk, the plurality of teaching positions are taught by the user as described above by use of one piece of range image obtained by measurement by the three-dimensional measuring device 40, thereby enabling to teach the picking positions with respect to the possible respective postures of the workpieces 50. [0073] On the other hand, in the first embodiment, the range image based on the three-dimensional points acquired under the actual optical conditions is acquired as input, and thus the three-dimensional point group information of the periphery of the drawing position can be stored, thereby enabling to prevent such a trouble that, as in the case of the teaching with a CAD model, the teaching position of the workpiece 50 corresponding to the teaching position on the CAD model cannot be acquired due to the optical conditions and the like. [0074] The following description is about the matching using the point group information for matching performed by the matching part 113 in order that the workpiece 50 is picked up. In order that the workpiece 50 is picked up, the matching part 113 acquires the range image of the workpieces 50 in bulk stacked in the container 60, from the three-dimensional measuring device 40. Then, the matching part 113 searches the acquired range image on the basis of the teaching position peripheral image data included in each piece of point group information for matching stored in the selection data storage part 111, by using a matching method of three-dimensional point groups, for example, ICP (Iterative Closest Point) matching. 
Then, the matching part 113 selects, for example, the central position of the image region having a high evaluation of matching in the range image, as the picking position of the workpiece 50 to be picked up. It is noted that the matching part 113 may select the plurality of image regions whose evaluations of matching are higher than a threshold value, and may use the image region having the highest height from the ground among the plurality of image regions, as new teaching position peripheral image data.)
Consider Claim 14.
The combination of Yamazaki and Wang teaches:
14. (Currently Amended) The machine learning device according to claim 1, wherein the acquisition unit acquires, image data of an area where a plurality of workpieces are present, as the training data and the inference data, and the training data includes teaching data of at least one picking position for the workpieces appeared on the image data. (Yamazaki: [0039] The image processing apparatus 10 a acquires and stores the range image data of the periphery of the picking position (hereinafter referred to as “teaching position peripheral image data”) taught by the user. It is noted that the suitable position as a candidate picking position herein is, for example, a position where the height of its three-dimensional point is high. [0040] In the case of selecting the picking position of the workpiece 50 thereafter, the image processing apparatus 10 a searches the range image for the image region matched to the stored teaching position peripheral image data. [0041] The image processing apparatus 10 a further performs machine learning on the basis of the teaching result, and builds a learning model for selecting the picking position of the workpiece 50. Then, the image processing apparatus 10 a selects a new picking position from the range image on the basis of the built learning model. The robot system 1 a with such a configuration is capable of selecting the picking position with higher accuracy. The outline of the robot system 1 a has been described so far. The following descriptions are about respective devices included in the robot system 1 a. [0068] FIG. 6 shows one example of the point group information for matching as point group information for matching 80. Although a teaching point is drawn merely for explanation in the example of FIG. 6, the teaching position peripheral image data included in the point group information for matching 80 includes no teaching position drawn. 
In the following description, the indication of “80” which is the reference numeral given to the point group information for matching will be omitted.)
Consider Claim 15.
The combination of Yamazaki and Wang teaches:
15. (Currently Amended) The machine learning device according to claim 1, wherein the acquisition unit acquires, three-dimensional measurement data of an area where a plurality of workpieces are present, as the training data and the inference data, and the training data includes teaching data of at least one picking position for the workpieces appeared in the three-dimensional measurement data. (Yamazaki: [0039] The image processing apparatus 10 a acquires and stores the range image data of the periphery of the picking position (hereinafter referred to as “teaching position peripheral image data”) taught by the user. It is noted that the suitable position as a candidate picking position herein is, for example, a position where the height of its three-dimensional point is high. [0040] In the case of selecting the picking position of the workpiece 50 thereafter, the image processing apparatus 10 a searches the range image for the image region matched to the stored teaching position peripheral image data. [0041] The image processing apparatus 10 a further performs machine learning on the basis of the teaching result, and builds a learning model for selecting the picking position of the workpiece 50. Then, the image processing apparatus 10 a selects a new picking position from the range image on the basis of the built learning model. The robot system 1 a with such a configuration is capable of selecting the picking position with higher accuracy. The outline of the robot system 1 a has been described so far. The following descriptions are about respective devices included in the robot system 1 a. [0068] FIG. 6 shows one example of the point group information for matching as point group information for matching 80. Although a teaching point is drawn merely for explanation in the example of FIG. 6, the teaching position peripheral image data included in the point group information for matching 80 includes no teaching position drawn. 
In the following description, the indication of “80” which is the reference numeral given to the point group information for matching will be omitted.)
Consider Claim 16.
The combination of Yamazaki and Wang teaches:
16. (Currently Amended) The machine learning device according to claim 14, wherein the training unit performs machine learning based on the training data, and the inference calculation unit generates inference result candidates including information about the at least one picking position for the workpieces. (Yamazaki: [0039] The image processing apparatus 10 a acquires and stores the range image data of the periphery of the picking position (hereinafter referred to as “teaching position peripheral image data”) taught by the user. It is noted that the suitable position as a candidate picking position herein is, for example, a position where the height of its three-dimensional point is high. [0040] In the case of selecting the picking position of the workpiece 50 thereafter, the image processing apparatus 10 a searches the range image for the image region matched to the stored teaching position peripheral image data. [0041] The image processing apparatus 10 a further performs machine learning on the basis of the teaching result, and builds a learning model for selecting the picking position of the workpiece 50. Then, the image processing apparatus 10 a selects a new picking position from the range image on the basis of the built learning model. The robot system 1 a with such a configuration is capable of selecting the picking position with higher accuracy. The outline of the robot system 1 a has been described so far. The following descriptions are about respective devices included in the robot system 1 a. [0068] FIG. 6 shows one example of the point group information for matching as point group information for matching 80. Although a teaching point is drawn merely for explanation in the example of FIG. 6, the teaching position peripheral image data included in the point group information for matching 80 includes no teaching position drawn. 
In the following description, the indication of “80” which is the reference numeral given to the point group information for matching will be omitted.)
Consider Claim 17.
The combination of Yamazaki and Wang teaches:
17. (Original) The machine learning device according to claim 16, wherein the model evaluation unit receives, from a control device comprising a motion execution unit causing a robot with a hand for picking out the workpieces to execute motions of picking out the workpieces by the hand, execution results of the picking motions by the motion execution unit based on results of inference of the at least one picking position for the workpieces outputted by the machine learning device, and evaluates whether the trained results of the plurality of trained models are good or bad based on the execution results of the picking motions. (Yamazaki: [0076] Herein, although the point group information for matching is generated on the basis of the teaching by the user as described above, all the pieces of point group information for matching are not necessarily appropriate. In an example, in some cases, the picking-up may succeed in the case where a region having a high evaluation of matching with a certain piece of point group information for matching is set as a picking position, but may fail in the case where a region having a high evaluation of matching with another piece of point group information for matching is set as a picking position. As described above, in some cases, the success or failure may depend on a piece of the point group information for matching. Thus, the matching part 113 may evaluate each piece of point group information for matching, thereby imparting an evaluation value to each piece of point group information for matching. Then, the matching part 113 preferably uses the point group information for matching having a high evaluation value. The matching part 113 stores the point group information for matching having the imparted evaluation values in the selection data storage part 111, in order that the point group information for matching having the imparted evaluation values is used as teacher data in the machine learning described later. 
It is noted that the point group information for matching having low evaluation values is also necessary as teacher data (failure data) for the machine learning. Therefore, the matching part 113 stores not only the point group information for matching having high evaluation values but also the point group information for matching having low evaluation values, as teacher data in the selection data storage part 111. [0077] The matching part 113 is capable of imparting an evaluation value depending on the success or failure in picking up the workpiece 50. In an example, in the case where a region having a high evaluation of matching with a certain piece of point group information for matching is set as a picking position, and further where the picking-up of the workpiece 50 succeeds, the matching part 113 imparts a higher evaluation value than the case where the picking-up fails. In an example, the matching part 113 imparts a first predetermined value or higher (for example, 60 points or higher) in the case where the picking-up of the workpiece 50 succeeds, and imparts a second predetermined value or lower (for example, 50 points or lower) in the case where the picking-up of the workpiece 50 fails. In an example, in the case where the picking-up of the workpiece 50 succeeds, the matching part 113 may impart an evaluation value further differently depending on the time taken for the picking-up. In an example, as the time taken for picking up the workpiece 50 is shorter, the matching part 113 may impart a higher evaluation value. In another example, in the case where the picking-up of the workpiece 50 fails, the matching part 113 may impart an evaluation value differently depending on the degree of failure. In an example, in the case where the workpiece 50 has been gripped but has fallen in the middle of the picking-up, the matching part 113 may impart a higher evaluation value than the case where the workpiece 50 has not been gripped.)
Consider Claim 18.
The combination of Yamazaki and Wang teaches:
18. (Currently Amended) The machine learning device according to claim 16, wherein the model selection unit receives, from a control device comprising a motion execution unit controlling a robot with a hand for picking out the workpieces to execute motions of picking out the workpieces by the hand, execution results of the picking motions by the motion execution unit based on results of inference of the at least one picking position for the workpieces outputted by the machine learning device, and selects a trained model based on the execution results of the picking motions. (Yamazaki: [0076] Herein, although the point group information for matching is generated on the basis of the teaching by the user as described above, all the pieces of point group information for matching are not necessarily appropriate. In an example, in some cases, the picking-up may succeed in the case where a region having a high evaluation of matching with a certain piece of point group information for matching is set as a picking position, but may fail in the case where a region having a high evaluation of matching with another piece of point group information for matching is set as a picking position. As described above, in some cases, the success or failure may depend on a piece of the point group information for matching. Thus, the matching part 113 may evaluate each piece of point group information for matching, thereby imparting an evaluation value to each piece of point group information for matching. Then, the matching part 113 preferably uses the point group information for matching having a high evaluation value. The matching part 113 stores the point group information for matching having the imparted evaluation values in the selection data storage part 111, in order that the point group information for matching having the imparted evaluation values is used as teacher data in the machine learning described later. 
It is noted that the point group information for matching having low evaluation values is also necessary as teacher data (failure data) for the machine learning. Therefore, the matching part 113 stores not only the point group information for matching having high evaluation values but also the point group information for matching having low evaluation values, as teacher data in the selection data storage part 111. [0077] The matching part 113 is capable of imparting an evaluation value depending on the success or failure in picking up the workpiece 50. In an example, in the case where a region having a high evaluation of matching with a certain piece of point group information for matching is set as a picking position, and further where the picking-up of the workpiece 50 succeeds, the matching part 113 imparts a higher evaluation value than the case where the picking-up fails. In an example, the matching part 113 imparts a first predetermined value or higher (for example, 60 points or higher) in the case where the picking-up of the workpiece 50 succeeds, and imparts a second predetermined value or lower (for example, 50 points or lower) in the case where the picking-up of the workpiece 50 fails. In an example, in the case where the picking-up of the workpiece 50 succeeds, the matching part 113 may impart an evaluation value further differently depending on the time taken for the picking-up. In an example, as the time taken for picking up the workpiece 50 is shorter, the matching part 113 may impart a higher evaluation value. In another example, in the case where the picking-up of the workpiece 50 fails, the matching part 113 may impart an evaluation value differently depending on the degree of failure. In an example, in the case where the workpiece 50 has been gripped but has fallen in the middle of the picking-up, the matching part 113 may impart a higher evaluation value than the case where the workpiece 50 has not been gripped.)
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAHMINA N ANSARI whose telephone number is (571)270-3379. The examiner can normally be reached on IFP Flex - Monday through Friday 9 to 5.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O'NEAL MISTRY, can be reached on 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
TAHMINA N. ANSARI
Examiner
Art Unit 2672
March 26, 2026
/TAHMINA N ANSARI/Primary Examiner, Art Unit 2674