Prosecution Insights
Last updated: April 19, 2026
Application No. 18/705,365

AREA DETECTION DEVICE, AREA DETECTION METHOD, AND PROGRAM

Non-Final OA: §103, §112
Filed
Apr 26, 2024
Examiner
CROCKETT, JOSHUA BRIGHAM
Art Unit
2661
Tech Center
2600 — Communications
Assignee
Nippon Telegraph and Telephone Corporation
OA Round
1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (13 granted / 18 resolved), +10.2% above the TC average
Interview Lift: +27.5% higher allow rate for resolved cases with an examiner interview (strong)
Typical Timeline: 3y 0m average prosecution, 26 applications currently pending
Career History: 44 total applications across all art units

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)

Comparisons are against Tech Center average estimates. Based on career data from 18 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 26 April 2024 was received and has been considered by the examiner.

Response to Amendment

The examiner acknowledges the preliminary amendments submitted on 26 April 2024. The preliminary amendments are entered. Claims 1-6 are amended. Claim 7 is canceled. Claims 8-9 are added. Claims 1-6 and 8-9 are pending in this action.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: "An Image Analysis Device, Method, and Program for Detecting Objects Having a Linear Band Shape"

Claim Objections

Claim 3 is objected to because of the following informalities: in lines 4-5, "wherein the line segment from the edges is detected" should read "wherein the line segment is detected from the edges". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-6 and 9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, applicant has claimed a system (i.e., apparatus) claim. However, applicant has not defined any specific structure for the claimed system. The claim merely recites a series of steps performed by the apparatus, without defining any structure for performing them. Because applicant has not defined any structural limitations, the scope of the claimed system is unclear, and applicant has therefore failed to particularly point out and distinctly claim the subject matter which the inventor regards as the invention. Claims 2-6 and 9 depend on claim 1 and are likewise rejected for failing to correct the ambiguity of claim 1.

Regarding claim 4, the claim is unclear because the syntax of the language causes confusion. Lines 1-2 recite "The region detection system of claim 3, further comprising noise is removed . . ." How can a system comprise "noise is removed"? Should it read "noise that is removed"? If so, how can a system "comprise noise"? For the purpose of examination, the examiner interprets the claim as "The region detection system according to claim 3, further comprising removing noise to generate . . ."

Further regarding claim 4, line 2 states "noise is removed" and line 5 states "and noise is not removed". It is unclear whether the noise is removed or is not removed, as it cannot be both given that these are contradictory states. The examiner interprets the claim as described below.
Further regarding claim 4, there are two instances of "an edge" in the claim (see lines 2 and 4). It is unclear whether each is a separate instance of an edge or whether they are subsets of "the edges" determined in claim 3. Referring to the applicant's specification, the examiner expects that the claim is attempting to claim matter similar to that presented in paragraphs [0069]-[0070]. Therefore, for the purpose of examination, the examiner interprets claim 4 to remove as noise a first subset of edges from among the edges, the first subset comprising those edges of claim 3 that are less than a predetermined edge threshold length; and interprets the line segment of claim 1 as selected from a second subset comprising the edges not removed as noise. Due to the number of occurrences of the word "edges", the examiner also recommends amending instances of "edges" with defining words such as "first set of edges", "second set of edges", or any other naming system that the applicant feels will clearly differentiate the uses of the words "edge" or "edges".

Regarding claim 5, the claim recites the limitation "a main line segment" in lines 2-3. It is unclear whether this is a new "main line segment" or the main line segment of claim 1, from which claim 5 depends. For the purpose of examination, the examiner interprets it as the main line segment of claim 1.

Further regarding claim 5, the claim recites "wherein a main line segment is extracted to generate the line segment having a maximum length". However, claim 1 first detects "a line segment" in line 2 and then extracts "a main line segment that is the line segment" meeting criteria in lines 4-5. It is unclear whether "the line segment" is derived from "the main line segment", as currently claimed in claim 5, or "the main line segment" is derived from "the line segment", as currently claimed in claim 1. Based on the previous version of the claim, the examiner expects that the claim is attempting to recite that the main line segment is the line segment having a maximum length from among possible line segments. If this is correct, the examiner recommends amending claim 1, line 2, to read "detecting a plurality of line segments . . ." and lines 4-5 to read "extracting a main line segment that is a line segment from the plurality of line segments having a . . ." Amending this way allows there to be a maximum among the plurality of line segments; as long as there is a single line segment, as currently claimed in claim 1, it would inherently be both the maximum and the minimum length line segment, which the examiner does not understand to be the applicant's intent. The examiner recommends amending claim 5 as "The region detection system according to claim 1, wherein the main line segment is the line segment having a maximum length from among the plurality of line segments." These recommendations depend on the examiner's understanding of the claims and the disclosure. If that understanding is incorrect, the examiner invites the applicant to amend the claims as they see fit such that the intended function is clear and definite.

Regarding claim 9, the claim recites the limitation "output of the corrected image" in line 1. There is insufficient antecedent basis for this limitation in the claim; therefore, it is unclear what output the claim is referring to.
For the purpose of examination, the examiner understands the claim as claiming wherein the corrected image is output and wherein the target object region indicating the target object comprises a predetermined color or pattern marking the target object in the corrected image.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Pribble et al. (US 20220130014 A1; hereafter, Pribble) in view of Yan et al. (CN 108875721 A; hereafter, Yan).

Regarding claim 1, Pribble discloses: A region detection system comprising: detecting a line segment from a captured image ([0019] and Fig. 1A, "As shown by reference number 104, the user device (e.g., via an application executing on the user device) identifies the object in the image. . . In some implementations, processing the image may include determining one or more elements concerning the object, such as an outline of the object, a boundary outline of the object," Determining an outline of the object or a boundary outline of the object is understood as detecting at least one line segment; see how Fig. 1A shows identifying the object lines as a rectangle of straight lines) including an image of a target object having a linear band shape ([0017] and Fig. 1A, "As shown in FIG. 1A, and by reference number 102, the user device captures an image including image data associated with an object."); generating, as a corrected image, a rotated image obtained by rotating the captured image such that the main line segment is perpendicular or parallel to an arrangement direction of pixels in the captured image ([0018] and Fig. 1A, the angle of orientation is based on the reference axes. As shown, the angle of orientation is off of an x axis and/or a y axis, which are understood as a direction of pixels in the captured image. [0022] and Fig. 1B, "rotates the image based on the angle of orientation. . . In some implementations, the user device may rotate the image by an angle of rotation that is equal to the angle of orientation, but in the opposite direction." When the image is rotated by an angle equal but opposite to the angle of orientation, it is understood to align the object with the reference axes, i.e., "parallel to an arrangement direction of pixels"); and detecting a target object region indicating the image of the target object from the corrected image ([0037] and Fig. 1D, "In some implementations, the action comprises a pre-processing operation, such as cropping the object out of the rotated image." Cropping an object is understood to detect the object and indicate the image of the target object. See how in Fig. 1D the rotated image, i.e., corrected image, has the object cropped from it. The cropped object may be understood as "a target object region indicating the image of the target object" as the cropped region becomes an image of the target object).

Pribble does not disclose expressly extracting a main line segment that is the line segment having a length equal to or more than a predetermined value. Yan discloses: extracting a main line segment that is the line segment having a length equal to or more than a predetermined value ([0046], step 210 detects lines and selects lines as objects of investigation if they exceed a length threshold, which is understood as lengths more than a predetermined value; [0047], the lines selected as "object of investigation" are understood as "a main line segment" as they are used in determining rotation and rotating, i.e., correcting, the image so that the line is horizontal).

Pribble and Yan are combinable because they are in the same field of endeavor of normalizing an image by rotation for improved image analysis (Pribble, [0013]-[0014]; Yan, [0006] and [0009]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the main line segment extraction and image correction of Yan with the invention of Pribble. The motivation for doing so would have been "to obtain a text image with consistent size, brightness, and orientation" (Yan, [0045]). Therefore, it would have been obvious to combine Yan with Pribble to obtain the invention as specified in claim 1.

Regarding claim 2, Pribble in view of Yan discloses the subject matter of claim 1. Pribble further discloses: The region detection system according to claim 1, wherein a corrected image is obtained by reducing the rotated image such that an entire region of the captured image includes the entire rotated image ([0023] and Fig. 1B, "In some implementations, rotating the image based on the angle of orientation may include changing the one or more dimensions of the rotated image so that the rotated image includes image pixels (e.g., pixels associated with the image before the user device rotates the image) and padding pixels (e.g., pixels that are added to the rotated image after the user device rotates the image)." The addition of the padding pixels and "changing one or more dimensions of the rotated image" is understood as reducing the image to include the entire rotated image; see Fig. 1B).

Regarding claim 3, Pribble in view of Yan discloses the subject matter of claim 1. Pribble does not disclose expressly detecting edges wherein the line segment is detected from the edges. Yan discloses: The region detection system according to claim 1, further comprising detecting edges from the captured image ([0046], lines in the image are detected; under the broadest reasonable interpretation, detecting lines may be understood as detecting edges in the image), wherein the line segment is detected from the edges ([0046], the "objects of investigation", i.e., the line segment, are selected from the detected lines, i.e., edges). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the line segment detection of Yan with the invention of Pribble. The motivation for doing so would have been "to obtain a text image with consistent size, brightness, and orientation" (Yan, [0045]). Therefore, it would have been obvious to combine Yan with Pribble to obtain the invention as specified in claim 3.
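Editor's note, not part of the Office Action: the combined teaching the examiner describes for claims 1 and 3 amounts to a common deskew pipeline. Below is a minimal sketch assuming OpenCV; every function name, threshold, and parameter is an illustrative assumption, not code from Pribble, Yan, or the application as filed.

```python
# Editor's illustrative sketch of the pipeline the rejection describes:
# detect edges, detect line segments, keep segments at or above a
# predetermined length, take the longest as the "main line segment",
# and rotate the image so that segment is parallel to the pixel rows.
# Names and thresholds are assumptions, not from the cited references.
import cv2
import numpy as np

def rotate_to_main_segment(image: np.ndarray, min_len: int = 100) -> np.ndarray:
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # claim 3: detect edges
    # Segments shorter than min_len (the "predetermined value") never qualify.
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=min_len, maxLineGap=5)
    if segments is None:
        return image  # nothing meets the predetermined length value

    def seg_len(seg):
        x1, y1, x2, y2 = seg[0]
        return float(np.hypot(x2 - x1, y2 - y1))

    # Take the longest qualifying segment as the main line segment.
    x1, y1, x2, y2 = max(segments, key=seg_len)[0]
    angle = float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))

    # Rotate by the segment's own angle so it lands horizontal,
    # i.e., "parallel to an arrangement direction of pixels".
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(image, M, (w, h))
```

In this sketch the length filter and the rotation correspond roughly to the limitations Yan is cited for, while Pribble is cited for the surrounding detect-rotate-crop flow.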
Regarding claim 6, Pribble discloses: A region detection method comprising: detecting a line segment from a captured image ([0019] and Fig. 1A, "As shown by reference number 104, the user device (e.g., via an application executing on the user device) identifies the object in the image. . . In some implementations, processing the image may include determining one or more elements concerning the object, such as an outline of the object, a boundary outline of the object," Determining an outline of the object or a boundary outline of the object is understood as detecting at least one line segment; see how Fig. 1A shows identifying the object lines as a rectangle of straight lines) including an image of a target object having a linear band shape ([0017] and Fig. 1A, "As shown in FIG. 1A, and by reference number 102, the user device captures an image including image data associated with an object."); generating, as a corrected image, a rotated image obtained by rotating the captured image such that the main line segment is perpendicular or parallel to an arrangement direction of pixels in the captured image ([0018] and Fig. 1A, the angle of orientation is based on the reference axes. As shown, the angle of orientation is off of an x axis and/or a y axis, which are understood as a direction of pixels in the captured image. [0022] and Fig. 1B, "rotates the image based on the angle of orientation. . . In some implementations, the user device may rotate the image by an angle of rotation that is equal to the angle of orientation, but in the opposite direction." When the image is rotated by an angle equal but opposite to the angle of orientation, it is understood to align the object with the reference axes, i.e., "parallel to an arrangement direction of pixels"); and detecting a target object region indicating the image of the target object from the corrected image ([0037] and Fig. 1D, "In some implementations, the action comprises a pre-processing operation, such as cropping the object out of the rotated image." Cropping an object is understood to detect the object and indicate the image of the target object. See how in Fig. 1D the rotated image, i.e., corrected image, has the object cropped from it. The cropped object may be understood as "a target object region indicating the image of the target object" as the cropped region becomes an image of the target object).

Pribble does not disclose expressly extracting a main line segment that is the line segment having a length equal to or more than a predetermined value. Yan discloses: extracting a main line segment that is the line segment having a length equal to or more than a predetermined value ([0046], step 210 detects lines and selects lines as objects of investigation if they exceed a length threshold, which is understood as lengths more than a predetermined value; [0047], the lines selected as "object of investigation" are understood as "a main line segment" as they are used in determining rotation and rotating, i.e., correcting, the image so that the line is horizontal). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the main line segment extraction and image correction of Yan with the invention of Pribble. The motivation for doing so would have been "to obtain a text image with consistent size, brightness, and orientation" (Yan, [0045]). Therefore, it would have been obvious to combine Yan with Pribble to obtain the invention as specified in claim 6.

Regarding claim 8, Pribble discloses: A computer-readable non-transitory recording medium storing computer executable program instructions ([0049], a static storage device 330, i.e., a non-transitory computer-readable medium, stores instructions) that when executed by a processor cause a computer to execute a program generation method ([0049], processor 320 executes the stored instructions to perform the method) comprising: detecting a line segment from a captured image ([0019] and Fig. 1A, as quoted above for claim 1; determining an outline of the object or a boundary outline of the object is understood as detecting at least one line segment, shown in Fig. 1A as a rectangle of straight lines) including an image of a target object having a linear band shape ([0017] and Fig. 1A, "As shown in FIG. 1A, and by reference number 102, the user device captures an image including image data associated with an object."); generating, as a corrected image, a rotated image obtained by rotating the captured image such that the main line segment is perpendicular or parallel to an arrangement direction of pixels in the captured image ([0018] and Fig. 1A, the angle of orientation is based on the reference axes, which are understood as a direction of pixels in the captured image; [0022] and Fig. 1B, rotating the image by an angle equal but opposite to the angle of orientation is understood to align the object with the reference axes, i.e., "parallel to an arrangement direction of pixels"); and detecting a target object region indicating the image of the target object from the corrected image ([0037] and Fig. 1D, cropping the object out of the rotated image is understood to detect the object and indicate the image of the target object, the cropped region becoming an image of the target object).

Pribble does not disclose expressly extracting a main line segment that is the line segment having a length equal to or more than a predetermined value.
Yan discloses: extracting a main line segment that is the line segment having a length equal to or more than a predetermined value ([0046], step 210 detects lines and selects lines as objects of investigation if they exceed a length threshold, which is understood as lengths more than a predetermined value; [0047], the lines selected as "object of investigation" are understood as "a main line segment" as they are used in determining rotation and rotating, i.e., correcting, the image so that the line is horizontal). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the main line segment extraction and image correction of Yan with the invention of Pribble. The motivation for doing so would have been "to obtain a text image with consistent size, brightness, and orientation" (Yan, [0045]). Therefore, it would have been obvious to combine Yan with Pribble to obtain the invention as specified in claim 8.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Pribble et al. (US 20220130014 A1; hereafter, Pribble) in view of Yan et al. (CN 108875721 A; hereafter, Yan), in further view of Kido (US 20100098339 A1).

Regarding claim 4, Pribble in view of Yan discloses the subject matter of claim 3. Pribble in view of Yan does not disclose expressly removing as noise edges less than an edge threshold value, wherein the line segment is detected from an edge that was not removed as noise. Kido discloses: The region detection system according to claim 3, further comprising removing as noise an edge less than a predetermined edge threshold value among the edges ([0420], "elimination of the noise component is realized by setting a threshold and eliminating a line segment with a length not larger than the predetermined length."), wherein the line segment is detected from an edge that is detected and is not removed as noise ([0420], "a function capable of selecting the contour in ascending order of length from the shorter contour is set, so as to obtain an appropriate search result in accordance with the registered image." The selected contour is understood as the line segment). Kido is combinable with Pribble in view of Yan because it is from the related field of endeavor of extracting contour or edge information and performing image processing based on that information (Kido, [0003]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the noise removal of Kido with the invention of Pribble in view of Yan. The motivation for doing so would have been that when "the noise component is effectively eliminated, a highly reliable search result can be obtained" (Kido, [0420]). Therefore, it would have been obvious to combine Kido with Pribble in view of Yan to obtain the invention as specified in claim 4.
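Editor's note, not part of the Office Action: the noise removal Kido is cited for reduces to filtering detected edges by length before segment selection. A hedged sketch, again assuming OpenCV; the function name and threshold value are hypothetical.

```python
# Editor's illustrative sketch of Kido-style noise removal: edge
# contours whose length does not exceed a predetermined threshold are
# discarded as noise; only the surviving edges feed line-segment
# selection. Names and the threshold are assumptions, not Kido's code.
import cv2

def remove_noise_edges(edge_image, min_edge_len: float = 30.0):
    # edge_image: 8-bit binary edge map (e.g., Canny output).
    contours, _ = cv2.findContours(edge_image, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    # "First subset": contours at or below the threshold, removed as noise.
    # "Second subset": contours that remain for line-segment selection.
    return [c for c in contours if cv2.arcLength(c, False) > min_edge_len]
```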
Claims 5 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Pribble et al. (US 20220130014 A1; hereafter, Pribble) in view of Yan et al. (CN 108875721 A; hereafter, Yan), in further view of Ouyang et al. (CN 110866928 A; hereafter, Ouyang).

Regarding claim 5, Pribble in view of Yan discloses the subject matter of claim 1. Pribble in view of Yan does not disclose expressly that the main line segment is the line segment having a maximum length. Ouyang discloses: The region detection system according to claim 1, wherein a main line segment is extracted as the line segment having a maximum length ([0080], the longest line segment, i.e., the segment having the maximum length, is detected and used to determine the angle of rotation of the object. Because the longest line segment is used to determine the angle of rotation, it is understood as the main line segment). Ouyang is combinable with Pribble in view of Yan because it is from the same field of endeavor of correcting an image by rotating the image based on an angled line (Ouyang, [0081]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the maximum-length segment selection of Ouyang with the invention of Pribble in view of Yan. The motivation for doing so would have been that obtaining a horizontal view of the object based on the longest line allows for "the precise position of the bounding rectangle of the target A to be detected is obtained, as shown in Figure 14" (Ouyang, [0086]) and accurate boundary detection (see Ouyang, [0087] and Fig. 15). Therefore, it would have been obvious to combine Ouyang with Pribble in view of Yan to obtain the invention as specified in claim 5.

Regarding claim 9, Pribble in view of Yan discloses the subject matter of claim 1. Pribble in view of Yan does not disclose expressly, and is not relied upon to disclose, that the output of the corrected image comprises marking the object by a predetermined color or pattern. Ouyang discloses: The region detection system of claim 1, wherein output of the corrected image comprises an object area indicated by a predetermined color or pattern ([0082]-[0086] and Fig. 14, an output is generated, as shown in Fig. 14, of a bounding box around the object in the image. The bounding box is understood as indicating the object using a predetermined pattern. As this limitation is in alternative form, i.e., "color or pattern", it is taught by at least one option being met). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the object indication of Ouyang with the invention of Pribble in view of Yan. The motivation for doing so would have been that "when the background target and the target to be detected are closely connected, even if the bounding box of this application contains a large amount of background noise, this application can still clearly mark the complete and accurate boundary of the target to be detected" (Ouyang, [0094]). Therefore, it would have been obvious to combine Ouyang with Pribble in view of Yan to obtain the invention as specified in claim 9.
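Editor's note, not part of the Office Action: as the examiner reads claim 9 against Ouyang, the output marks the target region in the corrected image with a bounding box, i.e., a predetermined pattern or color. A minimal hypothetical sketch; the helper and its parameters are the editor's own.

```python
# Editor's illustrative sketch: draw the detected target object region
# onto the corrected image in a predetermined color (red, in BGR), in
# the manner of a bounding-box output. Hypothetical, not Ouyang's code.
import cv2
import numpy as np

def mark_target_region(corrected: np.ndarray, region_mask: np.ndarray,
                       color=(0, 0, 255)) -> np.ndarray:
    # region_mask: 8-bit mask whose non-zero pixels are the target region.
    x, y, w, h = cv2.boundingRect(region_mask)
    marked = corrected.copy()
    cv2.rectangle(marked, (x, y), (x + w, y + h), color, thickness=2)
    return marked
```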
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Davis et al., US 20130223673 A1, discloses a system which detects objects and corrects their orientation for analysis.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA B CROCKETT, whose telephone number is (571) 270-7989. The examiner can normally be reached Monday-Thursday, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John M Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSHUA B. CROCKETT/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661

Prosecution Timeline

Apr 26, 2024
Application Filed
Mar 25, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12592060
ARTIFICIAL INTELLIGENCE DEVICE AND 3D AGENCY GENERATING METHOD THEREOF
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587704
VIDEO DATA TRANSMISSION AND RECEPTION METHOD USING HIGH-SPEED INTERFACE, AND APPARATUS THEREFOR
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12567150
EDITING PRESEGMENTED IMAGES AND VOLUMES USING DEEP LEARNING
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561839
SYSTEMS AND METHODS FOR CALIBRATING IMAGE SENSORS OF A VEHICLE
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12529639
METHOD FOR ESTIMATING HYDROCARBON SATURATION OF A ROCK
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 99% (+27.5%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
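As a worked example (editor's note): the headline figures are consistent with a simple additive model, which is an inference about this dashboard, not documented behavior.

```python
# Editor's sketch of the apparent arithmetic; the additive model is an
# assumption inferred from the displayed figures, not documented.
granted, resolved = 13, 18
allow_rate = granted / resolved                 # ~0.722 -> shown as 72%
interview_lift = 0.275                          # the +27.5% interview lift
with_interview = min(allow_rate + interview_lift, 1.0)
print(f"baseline {allow_rate:.1%}, with interview {with_interview:.1%}")
# -> baseline 72.2%, with interview 99.7% (displayed as 99%)
```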
