Prosecution Insights
Last updated: April 19, 2026

Application No. 18/003,385
INFORMATION PROCESSING APPARATUS AND METHOD
Status: Final Rejection (§103)

Filed: Dec 27, 2022
Examiner: JAMES, DOMINIQUE NICOLE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Rakuten Group Inc.
OA Round: 3 (Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 4-5
Expected Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (16 granted / 21 resolved), +14.2% vs TC avg, above average
Interview Lift: +38.5% among resolved cases with interview (a strong lift)
Avg Prosecution: 3y 4m (typical timeline)
Currently Pending: 27
Total Applications: 48 (across all art units)
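The figures above are simple ratios; as an illustrative sanity check (plain Python, not tied to any analytics tool's API), the headline allow rate and the implied Tech Center baseline follow directly from the counts shown:

```python
# Sanity check of the examiner statistics shown above (illustrative only).
granted = 16
resolved = 21

# Career allow rate, displayed on the dashboard rounded to a whole percent.
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 76.2%, shown as 76%

# The "+14.2% vs TC avg" delta implies a Tech Center average allow rate of:
implied_tc_avg = allow_rate - 14.2
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # 62.0%
```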

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 14.6% (-25.4% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Tech Center averages are estimates • Based on career data from 21 resolved cases
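A quick consistency check on the figures above (illustrative Python; the percentages are taken verbatim from the table): subtracting each reported delta from the examiner's per-statute rate recovers the same implied Tech Center baseline in every row.

```python
# Per-statute rate and delta vs the Tech Center average, copied from above.
rows = {
    "§101": (19.5, -20.5),
    "§103": (51.5, +11.5),
    "§102": (14.6, -25.4),
    "§112": (14.3, -25.7),
}

# Implied TC baseline per statute: examiner rate minus the reported delta.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(baselines)  # every row implies the same 40.0% baseline
```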

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

This action is in response to the application filed on December 22, 2025. Claims 1-5 and 7-19 are amended. Claim 6 is canceled. Thus, claims 1-5 and 7-19 are pending for examination in this application.

Priority

Receipt is acknowledged that this application is a National Stage application of PCT/JP2021/048570 with a priority date of December 27, 2021 under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on January 12, 2026 is being considered by the examiner. The IDS submitted on December 27, 2022 has references 13-16, 19, 21-24, and 27 lined through because a copy of the English-language translation is required if a written English-language translation of a non-English-language document, or portion thereof, is within the possession, custody, or control of, or is readily available to, any individual designated in § 1.56(c) (see MPEP 609, 37 CFR 1.98(a)(3)(ii)).

Response to Amendments

Applicant's remarks and amendments filed December 22, 2025, have been entered. Applicant's arguments regarding the objection to the title previously set forth in the Non-Final Office Action mailed September 22, 2025, are persuasive. Accordingly, the objection to the title is withdrawn. Applicant's arguments regarding the 35 U.S.C. 112(f) interpretations previously set forth in the Non-Final Office Action mailed September 22, 2025, are persuasive. Accordingly, the 35 U.S.C. 112(f) interpretations are withdrawn. Applicant's arguments regarding the 35 U.S.C. 112(b) rejections of claims 2, 3, and 14 previously set forth in the Non-Final Office Action mailed September 22, 2025, are persuasive. Accordingly, the 35 U.S.C. 112(b) rejections are withdrawn. Applicant's arguments regarding the 35 U.S.C. 101 rejections previously set forth in the Non-Final Office Action mailed September 22, 2025, are persuasive. Accordingly, the 35 U.S.C. 101 rejections are withdrawn.

Response to Arguments

Applicant's arguments filed December 22, 2025, regarding the rejections of claims 1-5 and 7-19 have been fully considered but are moot because the arguments do not apply to the new combination of references, facilitated by Applicant's newly submitted amendments, including new prior art (Ito et al., US 20180068452) being used in the current rejection.

Claim Objections

Claim 1 is objected to because of the following informalities: "image code" should be changed to "image acquisition code". Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Shuai et al., CN 109377479, in view of Ito et al., US 20180068452, in view of Muthukumar et al., WO 2021204350.

Regarding claim 1, Shuai teaches an information processing apparatus comprising: at least one memory configured to store program code; and at least one processor configured to operate as instructed by the program code, the program code comprising (see Shuai, Paragraph [0037], "1. Image collection," a processor and memory are needed to collect and store the image, and Paragraph [0044], "4. Model training," use of a processor to train the model): image code configured to cause the at least one processor to acquire an image used as teacher data for machine learning (see Shuai, Paragraphs [0036]-[0037], "1. Image collection … Images of the longitude and latitude at different times are downloaded to obtain the remote sensing image of the butterfly satellite antenna, as shown in Figure 2," image collection and images downloaded are considered to be an image acquisition unit), the image being with one or a plurality of annotations for showing a position in the image at which a predetermined object is shown (see Shuai, Paragraphs [0038]-[0039], "2. Image Annotation. Considering that the present invention adopts a two-stage target detection method, frame annotation and pixel-by-pixel annotation are designed respectively, and two types of annotations are performed on the antenna remote sensing image to form corresponding annotation files, as shown in Figures 3 and 4, respectively providing annotation data for the model training process required by the two-stage detection method", and [0045], "The butterfly satellite antenna remote sensing image and the corresponding pixel-by-pixel annotation obtained by annotating the file are used to train the fully convolutional deep learning network, and the deep learning model required for the first stage detection is obtained, as shown in Figure 5," two-stage target detection is considered to be the predetermined object to be detected; frame annotation and pixel-by-pixel annotation are considered to be a plurality of annotations showing a position in the image at which a predetermined object is shown); region specification code configured to cause the at least one processor to specify a region in which a plurality of annotations satisfy a predetermined criterion in the image (see Shuai, Paragraph [0047], "The edge of the pixel-level detection result is expanded to obtain a rectangular area containing all pixels. The rectangular area is accurately positioned and detected using the deep learning model required for the second stage of detection, and finally the accurate position of the butterfly satellite antenna target is obtained, as shown in Figure 8," a rectangular area is considered to be a region); edge detection code configured to cause the at least one processor to preferentially detect edges in the specified region or a range set on a basis of the region (see Shuai, Paragraph [0047], "The edge of the pixel-level detection result is expanded to obtain a rectangular area containing all pixels," the edge of the pixel-level detection result is considered to be edge detection); and machine learning code configured to cause the at least one processor to generate a learning model for detecting the predetermined object in an image by performing machine learning using teacher data including an image corrected by the annotation correction unit (see Shuai, Paragraph [0045], "the butterfly satellite antenna remote sensing image and the corresponding box annotation obtained by annotating the file are used to train the deep convolutional neural network of the Faster R-CNN architecture, and the deep learning model required for the second stage detection is obtained, as shown in Figure 6," the deep convolutional neural network is considered to be machine learning code; the annotated files used to train the deep convolutional neural network are considered to be performing machine learning using teacher data).
Shuai does not expressly teach wherein the predetermined criterion includes whether density of the plurality of annotations reaches a threshold.

However, Ito, in a similar invention in the same field of endeavor, teaches wherein the predetermined criterion includes whether density of the plurality of annotations reaches a threshold (see Ito, Paragraph [0342], "on the basis of the number of labels (number of provisional peak labels) of region label data of a label region and a plurality of predetermined threshold values of numbers of labels, and allocates region label data of a label region to each region label integration unit 18 on the basis of the estimation results").

The combination of Shuai and Ito is analogous art because both are in the same field of endeavor of marking an object in an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to allocate region label data of a label region to each region based on the number of labels of region label data of a label region and a plurality of predetermined threshold values, as taught in the image processing device of Ito, in the method of Shuai so that the computational load to integrate the label data is equalized (see Ito, Abstract).
Shuai in view of Ito does not expressly teach annotation correction code configured to cause the at least one processor to correct the plurality of annotations so as to be along the detected edges.

However, Muthukumar, in a similar invention in the same field of endeavor, teaches annotation correction code configured to cause the at least one processor to correct the plurality of annotations so as to be along the detected edges (see Muthukumar, Paragraphs [0023]-[0024], "the adjusting of the position of the marking comprises adjusting the position of the marking of the object being subjected to edge detection of a masked image in the sequence with respect to a previous image in the sequence, wherein the marking is considered to be aligned with the object being subjected to edge detection when a number of overlapping pixels between the marking and the object exceeds an overlap threshold value or when a maximum number of overlapping pixels is acquired," adjusting the position of the marking is considered to be annotation correction).

The combination of Shuai, Ito, and Muthukumar is analogous art because all are in the same field of endeavor of marking an object in an image.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adjust the position of the markings of the object with respect to a previous image in the sequence and align them based on an overlap threshold, obtain an optimal position of the bounding box, determine whether or not the bounding box is aligned with the position of the object, move the bounding box until it overlaps with the contours of the object, and perform annotation of an object with a processing unit, as taught in the device of Muthukumar, in the method of Shuai in view of Ito to reduce the cost of visual data annotation and allow image and video analytics infrastructure to be built faster and cheaper; creation of training data is currently the limiting factor for these applications (see Muthukumar, Paragraph [0012]).

Regarding claim 2, Shuai in view of Ito in view of Muthukumar further teaches the information processing apparatus according to claim 1, wherein the region specification code is further configured to cause the at least one processor to specify the region in which an amount of the plurality of annotations satisfies the predetermined criterion in the image (see Shuai, Paragraph [0051], "The present invention realizes the automatic detection of remote sensing antenna targets, and adopts the detection rate and the false alarm rate as the final measurement indicators, wherein the detection rate is the ratio of the number of antenna targets detected by the algorithm to the total number of targets actually contained in the image, and the false alarm rate is the ratio of the number of non-antenna targets detected by the algorithm to the total number of targets actually contained in the image," in order to calculate a detection rate there must be a predetermined criterion to differentiate between actual antenna targets and non-antenna targets). The rationale of claim 1 has been applied herein.
Regarding claim 3, Shuai in view of Ito in view of Muthukumar further teaches the information processing apparatus according to claim 1, wherein the region specification code is further configured to cause the at least one processor to specify the region in which positions of the plurality of annotations are related (see Muthukumar, Paragraph [0083], "The optimal position for the BB in terms of alignment with the mask Y is determined as the position where the defined above error is minimized. In other words, the position of the BB where there is a maximum overlap of is with the object (and thus the mask Y)," and Paragraph [0085], "It is noted that when the objects to be annotated have irregular shapes or contours, alternatives to using BB annotation may be envisaged such as e.g. polygon annotation or semantic per-pixel segmentation. [0086] If any of these alternative annotation approaches are utilized, it may be beneficial to modify the procedure of searching for maximum overlap between edge pixels of the BB and the object being subjected to edge detection as described with reference to Figures 9a-c to maximizing the overlap between pixels of image areas covered by the BB and the object being subjected to edge detection," the BB is considered to be a plurality of annotations; the optimal position for the BB (bounding box) is considered to show that the positions of the plurality of annotations are related). The rationale of claim 1 has been applied herein.
Regarding claim 4, Shuai in view of Ito in view of Muthukumar further teaches the information processing apparatus according to claim 1, wherein the program code further comprises: estimation code configured to cause the at least one processor to estimate positions at which the plurality of annotations were intended to be made, on a basis of the detected edges, wherein the annotation correction code is further configured to cause the at least one processor to move the positions of the plurality of annotations to the estimated positions (see Muthukumar, Paragraph [0080], "Figure 9b illustrates the mask Y of Figure 6a together with the BB, which in this example has drifted and is not aligned with the mask Y (and is thus not aligned with the object represented by the mask). Hence, the BB should be moved to a position where the BB indeed is aligned with the object to be marked in the masked image ISEG. In Figure 9b, the BB consists of the pixels enclosed by dashed lines while the mask Y consists of the pixels enclosed by continuous lines," when the BB is not aligned with the mask, and thus not aligned with the object represented by the mask, the BB is moved to a position where the BB is aligned, which is considered to be the annotation correction code moving the positions of the annotations to the positions estimated by the estimation code). The rationale of claim 1 has been applied herein.

Regarding claim 5, Shuai in view of Ito in view of Muthukumar further teaches the information processing apparatus according to claim 1, wherein the annotation correction code is further configured to cause the at least one processor to move the positions of the plurality of annotations to positions of edges closest to the plurality of annotations (see Muthukumar, Paragraph [0081], "Thus, as illustrated in Figure 9c, the BB is moved until it overlaps with the contours of the object as represented by the mask Y. In practice, the BB is moved in every direction, but by no more than X pixels where X can be considered to represent a search area around an initial BB position location. In the case of consecutive images in the sequence, X is a small number since the object does not move much from one image to another, unless a large instant movement of the camera is undertaken which may result in a greater number X," the BB being moved until it overlaps with the contours of the object is considered to be moving the positions of the annotations to positions of edges closest to the annotations). The rationale of claim 1 has been applied herein.

Regarding claim 7, Shuai in view of Ito in view of Muthukumar further teaches the information processing apparatus according to claim 1, wherein the program code further comprises: object acquisition code configured to cause the at least one processor to acquire an image to be processed (see Muthukumar, Paragraph [0045], "Figure 2 illustrates a flowchart of a method of facilitating annotation of an object in a sequence of images according to an embodiment. The method may be performed by a processing unit of a computer such as a laptop or desktop forming a workplace of the system operator along with a screen on which the sequence of images can be displayed," a method for facilitating annotation of an object in a sequence of images performed by the processing unit is considered to be a processing object acquisition unit that acquires an image to be processed); and object detection code configured to cause the at least one processor to detect, by using the learning model, the predetermined object in the image to be processed (see Shuai, Paragraph [0018], "The remote sensing image of the butterfly satellite antenna to be detected is pre-detected using the deep learning model required for the first stage of detection to obtain pixel-level detection results. The pixel-level detection results are edge expanded to obtain a rectangular area containing all pixels. The rectangular area is accurately positioned and detected using the deep learning model required for the second stage of detection to obtain the accurate position of the butterfly satellite antenna target," the butterfly satellite antenna to be detected is considered to be the predetermined object in the image to be processed). The rationale of claim 6 has been applied herein.

As per Claim 18, Claim 18 claims a method comprising: an image acquisition step of acquiring an image used as teacher data for machine learning as claimed in Claim 1. Therefore, the rejection and rationale are analogous to those made for Claim 1.

Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Shuai et al., CN 109377479, in view of Ito et al., US 20180068452, in view of Muthukumar et al., WO 2021204350, in further view of Kobayashi, JP 2021077921.

Regarding claim 8, Shuai in view of Ito in view of Muthukumar does not expressly teach wherein the program code further comprises angle calculation code configured to cause the at least one processor to calculate an angle of a detected object relative to a predetermined reference in the image to be processed.
However, Kobayashi, in a similar invention in the same field of endeavor, teaches wherein the program code further comprises angle calculation code configured to cause the at least one processor to calculate an angle of a detected object relative to a predetermined reference in the image to be processed (see Kobayashi, Paragraph [0054], "The calculation unit 21 is intended to calculate the position and direction of the array antenna 5 from antenna shape information consisting of the external shape state of the array antenna 5 installed at the desired location and angle, and can also calculate the angle at which the antenna 5 should be adjusted, which is necessary for the correction, when correcting the direction in which the radio wave emission surface 55 of the array antenna 5 is facing," the calculation unit 21 is considered to be the angle calculation unit).

The combination of Shuai, Ito, Muthukumar, and Kobayashi is analogous art because all are in the same field of endeavor of marking an object in an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate the position and direction of the antenna to calculate the angle, and perform image recognition of the horizontal side (horizontal axis) and vertical side (vertical axis) as the outer edges of the front and back sides of housing 51 of array antenna 5, as taught in the method of Kobayashi, in the method of Shuai in view of Ito in view of Muthukumar to make it easy to check whether the array antenna is pointing in the expected direction, which makes construction more efficient, shortens work hours, and reduces the number of workers required (reducing costs) (see Kobayashi, Paragraph [0033]).
Regarding claim 9, Shuai in view of Ito in view of Muthukumar in further view of Kobayashi further teaches the information processing apparatus according to claim 8, wherein the angle calculation code is further configured to cause the at least one processor to calculate the angle of the detected object relative to any of a predetermined compass direction, a vertical direction, and a horizontal direction in the image to be processed (see Kobayashi, Paragraph [0073], "a method may be used in which the image recognition unit 22 performs image recognition of the horizontal side (horizontal axis) and vertical side (vertical axis) as the outer edges of the front and back sides of housing 51 of array antenna 5, and the position of array antenna 5 and the direction in which it is pointing are calculated by calculation unit 21 from their inclination and length in space. There are various known techniques for calculating the position of array antenna 5 and the direction in which it is pointing from the image recognition of array antenna 5, and the method is not limited to any particular method as long as such calculations are possible," the horizontal side (horizontal axis) and vertical side (vertical axis) are considered to be a predetermined compass direction). The rationale of claim 8 has been applied herein.

Claims 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Shuai et al., CN 109377479, in view of Ito et al., US 20180068452, in view of Kobayashi, JP 2021077921.

Regarding claim 10, Shuai teaches an information processing apparatus comprising: at least one memory configured to store program code; and at least one processor configured to operate as instructed by the program code, the program code comprising (see Shuai, Paragraph [0037], "1. Image collection," a processor and memory are needed to collect and store the image, and Paragraph [0044], "4. Model training," use of a processor to train the model): processing object acquisition code configured to cause the at least one processor to acquire an image to be processed (see Shuai, Paragraphs [0036]-[0037], "1. Image collection … Images of the longitude and latitude at different times are downloaded to obtain the remote sensing image of the butterfly satellite antenna, as shown in Figure 2," image collection and images downloaded are considered to be processing object acquisition code); object detection code configured to cause the at least one processor to detect, by using a learning model for detecting the predetermined object in an image, an antenna device installed outdoors as a predetermined object in the image to be processed, the learning model being generated by machine learning using teacher data including an image with one or a plurality of annotations for showing a position in the image at which the predetermined object is shown (see Shuai, Paragraphs [0038]-[0039], "2. Image Annotation. Considering that the present invention adopts a two-stage target detection method," and Paragraph [0045], "The butterfly satellite antenna remote sensing image and the corresponding pixel-by-pixel annotation obtained by annotating the file are used to train the fully convolutional deep learning network, and the deep learning model required for the first stage detection is obtained, as shown in Figure 5," two-stage target detection is considered to be object detection; the deep learning network is considered to be a learning model; the butterfly satellite antenna is an antenna device installed outdoors; image annotation is considered to be a plurality of annotations), a region within the image being specified (see Shuai, Paragraph [0047], "The edge of the pixel-level detection result is expanded to obtain a rectangular area containing all pixels. The rectangular area is accurately positioned and detected using the deep learning model required for the second stage of detection, and finally the accurate position of the butterfly satellite antenna target is obtained, as shown in Figure 8," a rectangular area is considered to be a region).

Shuai does not expressly teach when the plurality of annotations satisfy a predetermined criterion that includes whether density of a plurality of annotations reaches a threshold.

However, Ito, in a similar invention in the same field of endeavor, teaches when the plurality of annotations satisfy a predetermined criterion that includes whether density of a plurality of annotations reaches a threshold (see Ito, Paragraph [0342], "on the basis of the number of labels (number of provisional peak labels) of region label data of a label region and a plurality of predetermined threshold values of numbers of labels, and allocates region label data of a label region to each region label integration unit 18 on the basis of the estimation results").

The combination of Shuai and Ito is analogous art because both are in the same field of endeavor of marking an object in an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to allocate region label data of a label region to each region based on the number of labels of region label data of a label region and a plurality of predetermined threshold values, as taught in the image processing device of Ito, in the method of Shuai so that the computational load to integrate the label data is equalized (see Ito, Abstract).

Shuai in view of Ito does not expressly teach an angle calculation unit that calculates an angle of a detected object relative to a predetermined reference in the image to be processed.
However, Kobayashi, in a similar invention in the same field of endeavor, teaches an angle calculation unit that calculates an angle of a detected object relative to a predetermined reference in the image to be processed (see Kobayashi, Paragraph [0054], "The calculation unit 21 is intended to calculate the position and direction of the array antenna 5 from antenna shape information consisting of the external shape state of the array antenna 5 installed at the desired location and angle, and can also calculate the angle at which the antenna 5 should be adjusted, which is necessary for the correction, when correcting the direction in which the radio wave emission surface 55 of the array antenna 5 is facing," the calculation unit 21 is considered to be the angle calculation unit).

The combination of Shuai, Ito, and Kobayashi is analogous art because all are in the same field of endeavor of marking an object in an image. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate the position and direction of the antenna, as taught in the method of Kobayashi, in the method of Shuai in view of Ito to make it easy to check whether the array antenna is pointing in the expected direction, which makes construction more efficient, shortens work hours, and reduces the number of workers required (reducing costs) (see Kobayashi, Paragraph [0033]).

As per Claim 19, Claim 19 claims a method comprising: a processing object acquisition step of acquiring an image to be processed as claimed in Claim 10. Therefore, the rejection and rationale are analogous to those made for Claim 10.

Claims 11-17 are rejected under 35 U.S.C. 103 as being unpatentable over Shuai et al., CN 109377479, in view of Ito et al., US 20180068452, in view of Kobayashi, JP 2021077921, in further view of Muthukumar et al., WO 2021204350.
Regarding claim 11, Shuai in view of Ito in view of Kobayashi further teaches the information processing apparatus according to claim 10, further comprising: an image acquisition unit that acquires the image with the one or the plurality of annotations for showing the position at which the predetermined object is shown (see Shuai, Paragraphs [0038]-[0039], "2. Image Annotation. Considering that the present invention adopts a two-stage target detection method, frame annotation and pixel-by-pixel annotation are designed respectively, and two types of annotations are performed on the antenna remote sensing image to form corresponding annotation files, as shown in Figures 3 and 4, respectively providing annotation data for the model training process required by the two-stage detection method", and [0045], "The butterfly satellite antenna remote sensing image and the corresponding pixel-by-pixel annotation obtained by annotating the file are used to train the fully convolutional deep learning network, and the deep learning model required for the first stage detection is obtained, as shown in Figure 5," image collection and images downloaded are considered to be an image acquisition unit; two-stage target detection is considered to be the predetermined object to be detected; frame annotation and pixel-by-pixel annotation are considered to be a plurality of annotations showing a position in the image at which a predetermined object is shown); an edge detection unit that detects edges in the image (see Shuai, Paragraph [0047], "The edge of the pixel-level detection result is expanded to obtain a rectangular area containing all pixels," the edge of the pixel-level detection result is considered to be edge detection); and a machine learning unit that generates the learning model by performing the machine learning using teacher data including an image corrected by the annotation correction unit (see Shuai, Paragraph [0045], "The butterfly satellite antenna remote sensing image and the corresponding pixel-by-pixel annotation obtained by annotating the file are used to train the fully convolutional deep learning network, and the deep learning model required for the first stage detection is obtained, as shown in Figure 5," the deep convolutional neural network is considered to be a machine learning unit; the annotated files used to train the deep convolutional neural network are considered to be performing machine learning using teacher data).

Shuai in view of Ito in view of Kobayashi does not expressly teach an annotation correction unit that corrects the annotations so as to be along the detected edges.

However, Muthukumar, in a similar invention in the same field of endeavor, teaches an annotation correction unit that corrects the annotations so as to be along the detected edges (see Muthukumar, Paragraphs [0023]-[0024], "the adjusting of the position of the marking comprises adjusting the position of the marking of the object being subjected to edge detection of a masked image in the sequence with respect to a previous image in the sequence, wherein the marking is considered to be aligned with the object being subjected to edge detection when a number of overlapping pixels between the marking and the object exceeds an overlap threshold value or when a maximum number of overlapping pixels is acquired," adjusting the position of the marking of the object being subjected to edge detection of a masked image is considered to be annotation correction so as to be along the detected edges).

The combination of Shuai, Ito, Kobayashi, and Muthukumar is analogous art because all are in the same field of endeavor of marking an object in an image.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adjust the position of the markings of the object with respect to a previous image in the sequence and aligned based on an overlap threshold, obtain an optimal position of the bounding box, determine whether or not the bounding box is aligned with position of the object, move the bounding box until it overlaps with the contours of the object, and perform annotation of an object with a processing unit as taught in the device of Muthukumar in the method of Shuai in view of Ito in view of Kobayashi to reduce cost of visual data annotation and allows image and video analytics infrastructure to be built faster and cheaper; creating of training data creation is currently the limitation factor for these applications (see Muthukumar Paragraph [0012]). Regarding claim 12, Shuai in view of Ito in view of Kobayashi in further view of Muthukumar teaches all of the limitations of claim 11. Shuai in view of Ito in view of Kobayashi in further view of Muthukumar teaches the information processing apparatus according to claim 11, wherein the program code further comprises edge detection code configured to cause the at least one processor to preferentially detect edges in the specified region or a range set on a basis of the region (see Shuai, Paragraph [0047], “The edge of the pixel-level detection result is expanded to obtain a rectangular area containing all pixels,” a rectangular area is considered to be a region). The rationale of claim 11 has been applied herein. 
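The alignment test Muthukumar describes in Paragraphs [0023]-[0024] — a marking is treated as aligned with the edge-detected object once the number of overlapping pixels exceeds a threshold — can be sketched as follows. This is an illustrative reconstruction, not code from any cited reference; the array layout, function names, and threshold value are all assumptions.

```python
import numpy as np

def overlap_pixels(box, mask):
    """Count object-mask pixels covered by an axis-aligned box.

    box is (row0, col0, row1, col1) with exclusive upper bounds;
    mask is a binary array in which 1 marks the edge-detected object.
    """
    r0, c0, r1, c1 = box
    return int(mask[r0:r1, c0:c1].sum())

def is_aligned(box, mask, overlap_threshold):
    """The annotation counts as aligned once its pixel overlap with
    the object exceeds the threshold (cf. Muthukumar [0023]-[0024])."""
    return overlap_pixels(box, mask) > overlap_threshold

# Toy example: a 3x3 object at rows/cols 3..5 of a 10x10 image.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[3:6, 3:6] = 1
print(is_aligned((3, 3, 6, 6), mask, overlap_threshold=5))  # True (9 > 5)
print(is_aligned((0, 0, 3, 3), mask, overlap_threshold=5))  # False (0 > 5)
```

The same overlap count also supports the alternative condition the reference recites — accepting the position with the maximum number of overlapping pixels rather than a fixed threshold.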
Regarding claim 13, Shuai in view of Ito in view of Kobayashi in further view of Muthukumar further teaches the information processing apparatus according to claim 12, wherein the region specification code is further configured to cause the at least one processor to specify the region in which an amount of annotations relative to area satisfies the predetermined criterion in the image (see Shuai, Paragraph [0051], “The present invention realizes the automatic detection of remote sensing antenna targets, and adopts the detection rate and the false alarm rate as the final measurement indicators, wherein the detection rate is the ratio of the number of antenna targets detected by the algorithm to the total number of targets actually contained in the image, and the false alarm rate is the ratio of the number of non-antenna targets detected by the algorithm to the total number of targets actually contained in the image,” in order to calculate a detection rate there must be a predetermined criterion to differentiate between actual antenna targets and non-antenna targets). The rationale of claim 12 has been applied herein.

Regarding claim 14, Shuai in view of Ito in view of Kobayashi in further view of Muthukumar further teaches the information processing apparatus according to claim 12, wherein the region specification code is further configured to cause the at least one processor to specify the region in which positions of the plurality of annotations are in a predetermined relationship (see Muthukumar, Paragraph [0083], “The optimal position for the BB in terms of alignment with the mask Y is determined as the position where the above-defined error is minimized. In other words, the position of the BB where there is a maximum overlap of it with the object (and thus the mask Y),” and Paragraph [0085], “It is noted that when the objects to be annotated have irregular shapes or contours, alternatives to using BB annotation may be envisaged such as e.g. polygon annotation or semantic per-pixel segmentation. [0086] If any of these alternative annotation approaches are utilized, it may be beneficial to modify the procedure of searching for maximum overlap between edge pixels of the BB and the object being subjected to edge detection as described with reference to Figures 9a-c to maximizing the overlap between pixels of image areas covered by the BB and the object being subjected to edge detection,” the BB (bounding box) is considered to be a plurality of annotations; the optimal position for the BB is considered to place the plurality of annotations in a predetermined relationship). The rationale of claim 12 has been applied herein.

Regarding claim 15, Shuai in view of Ito in view of Kobayashi in further view of Muthukumar further teaches the information processing apparatus according to claim 12, wherein the program code further comprises: estimation code configured to cause the at least one processor to estimate positions at which the plurality of annotations were intended to be made, on a basis of the detected edges, wherein the annotation correction code is further configured to cause the at least one processor to move the positions of the plurality of annotations to the estimated positions (see Muthukumar, Paragraph [0080], “Figure 9b illustrates the mask Y of Figure 6a together with the BB, which in this example has drifted and is not aligned with the mask Y (and is thus not aligned with the object represented by the mask). Hence, the BB should be moved to a position where the BB indeed is aligned with the object to be marked in the masked image ISEG. In Figure 9b, the BB consists of the pixels enclosed by dashed lines while the mask Y consists of the pixels enclosed by continuous lines,” when the BB is not aligned with the mask, and thus not aligned with the object represented by the mask, the BB is moved to a position where it is aligned, which is considered to be the annotation correction code moving the positions of the annotations to the positions estimated by the estimation code). The rationale of claim 12 has been applied herein.

Regarding claim 16, Shuai in view of Ito in view of Kobayashi in further view of Muthukumar further teaches the information processing apparatus according to claim 12, wherein the annotation correction code is further configured to cause the at least one processor to move the positions of the plurality of annotations to positions of edges closest to the plurality of annotations (see Muthukumar, Paragraph [0081], “Thus, as illustrated in Figure 9c, the BB is moved until it overlaps with the contours of the object as represented by the mask Y. In practice, the BB is moved in every direction, but by no more than X pixels where X can be considered to represent a search area around an initial BB position location. In the case of consecutive images in the sequence, X is a small number since the object does not move much from one image to another, unless a large instant movement of the camera is undertaken which may result in a greater number X,” the BB being moved until it overlaps with the contours of the object is considered to be moving the positions of the annotations to positions of edges closest to the annotations). The rationale of claim 12 has been applied herein.
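The bounded search cited for claims 15-16 — moving the bounding box in every direction by no more than X pixels and keeping the position of maximum overlap with the object's contours (Muthukumar, Paragraph [0081]) — might look like the sketch below. The exhaustive-search strategy, function names, and data layout are illustrative assumptions, not the reference's actual implementation.

```python
import numpy as np

def realign_box(box, mask, max_shift):
    """Shift an axis-aligned box (row0, col0, row1, col1) by at most
    max_shift pixels in every direction and return the shift that
    maximizes overlap with the binary object mask. max_shift plays the
    role of the search area X in Muthukumar's paragraph [0081]."""
    h, w = mask.shape
    r0, c0, r1, c1 = box
    best_score, best_box = -1, box
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            nr0, nc0, nr1, nc1 = r0 + dr, c0 + dc, r1 + dr, c1 + dc
            if nr0 < 0 or nc0 < 0 or nr1 > h or nc1 > w:
                continue  # the shifted box must stay inside the image
            score = int(mask[nr0:nr1, nc0:nc1].sum())
            if score > best_score:
                best_score, best_box = score, (nr0, nc0, nr1, nc1)
    return best_box

# A drifted 3x3 box snaps back onto a 3x3 object two pixels away.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[4:7, 4:7] = 1
print(realign_box((2, 2, 5, 5), mask, max_shift=3))  # (4, 4, 7, 7)
```

As the reference notes, a small max_shift suffices for consecutive frames where the object moves little between images; a larger value would be needed after a sudden camera movement.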
Regarding claim 17, Shuai in view of Ito in view of Kobayashi in further view of Muthukumar further teaches the information processing apparatus according to claim 12, wherein the angle calculation code is further configured to cause the at least one processor to calculate the angle of the detected object relative to any of a predetermined compass direction, a vertical direction, and a horizontal direction in the image to be processed (see Kobayashi, Paragraph [0054], “The calculation unit 21 is intended to calculate the position and direction of the array antenna 5 from antenna shape information consisting of the external shape state of the array antenna 5 installed at the desired location and angle, and can also calculate the angle at which the antenna 5 should be adjusted, which is necessary for the correction, when correcting the direction in which the radio wave emission surface 55 of the array antenna 5 is facing,” the calculation unit 21 is considered to be the angle calculation unit, and Paragraph [0073], “a method may be used in which the image recognition unit 22 performs image recognition of the horizontal side (horizontal axis) and vertical side (vertical axis) as the outer edges of the front and back sides of housing 51 of array antenna 5, and the position of array antenna 5 and the direction in which it is pointing are calculated by calculation unit 21 from their inclination and length in space.”). The rationale of claim 12 has been applied herein.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOMINIQUE JAMES whose telephone number is (703)756-1655. The examiner can normally be reached 9:00 am - 6:00 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DOMINIQUE JAMES/
Examiner, Art Unit 2666

/MING Y HON/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Dec 27, 2022
Application Filed
Jun 10, 2025
Non-Final Rejection — §103
Sep 17, 2025
Non-Final Rejection — §103
Dec 22, 2025
Response Filed
Mar 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591976
CELL SEGMENTATION IMAGE PROCESSING METHODS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12567138
REGISTRATION METROLOGY TOOL USING DARKFIELD AND PHASE CONTRAST IMAGING
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12548159
SCENE PERCEPTION SYSTEMS AND METHODS
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12462681
Detection of Malfunctions of the Switching State Detection of Light Signal Systems
Granted Nov 04, 2025 (2y 5m to grant)
Patent 12462346
MACHINE LEARNING BASED NOISE REDUCTION CIRCUIT
Granted Nov 04, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+38.5%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
