Prosecution Insights
Last updated: April 19, 2026
Application No. 18/537,749

CONTROL METHOD FOR DETECTION SYSTEM

Final Rejection §103
Filed: Dec 12, 2023
Examiner: HOANG, HAN DINH
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Innocare Optoelectronics Corporation
OA Round: 2 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 2m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74%, above average (120 granted / 162 resolved; +12.1% vs TC avg)
Interview Lift: +19.3% among resolved cases with interview
Typical Timeline: 3y 2m average prosecution, 25 applications currently pending
Career History: 187 total applications across all art units

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 65.7% (+25.7% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 162 resolved cases.
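The per-statute deltas above are internally consistent: subtracting each printed "vs TC avg" delta from its rate backs out the same Tech Center baseline for every statute. A quick arithmetic check (the numbers come from the table; the subtraction model is an assumption about how the deltas were computed):

```python
# Career rates per statute and their printed deltas vs the Tech Center average.
rates  = {"101": 6.9, "103": 65.7, "102": 15.5, "112": 7.1}
deltas = {"101": -33.1, "103": 25.7, "102": -24.5, "112": -32.9}

# If delta = rate - TC_average, every statute should back out the same baseline.
implied_tc_avg = {k: round(rates[k] - deltas[k], 1) for k in rates}
print(implied_tc_avg)  # every statute implies the same 40.0 baseline
```

All four statutes imply a 40.0% baseline, suggesting the deltas were computed against a single Tech Center figure rather than per-statute averages.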

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/12/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4-7, 12 and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US PG-Pub US 20160061966 A1) in view of Lee et al. (US PG-Pub US 20200219450 A1).

Regarding Claim 1, Kim teaches a control method for a detection system, the detection system comprising a detection device (¶[0056]: "FIGS. 22A to 24 are flowcharts illustrating control methods of an X-ray imaging apparatus when operation of detecting X-rays is performed"), wherein the detection device comprises a plurality of scanning lines (¶[0127]: "Hereinafter, the X-ray detector 120 that divides scanning lines based on the first and second reference lines as shown in FIG. 10 to divide a region of uninterest from a region of interest and performs 2×2 binning scanning will be described as an example"; ¶[0127] discloses that scanning lines are divided to determine a region of interest from the x-ray scan), the control method comprising: generating, by the detection device, first image data (¶[0061]: "Referring to FIG. 1, an X-ray imaging apparatus 100 according to an embodiment of the present disclosure may include an X-ray source 110 to generate and irradiate X-rays, an X-ray detector 120 to detect X-rays transmitted through an object, and an image processor 130 to produce an X-ray image about the inside of the object using the detected X-rays."; ¶[0061] discloses using an image processor to produce a first x-ray image), and at least a part in the first image data corresponding to a key area (¶[0085]: "The binning scanning is performing pixel binning on a predetermined region of an X-ray image to acquire a low-resolution X-ray image"; ¶[0085] discloses performing pixel binning to determine a region of interest in the x-ray image, and Fig.
10 shows using fast scanning to determine a region of interest in the x-ray image and slow scanning for regions that aren't relevant); controlling, by the detection device, a first part corresponding to the key area in the plurality of scanning lines in a first scanning manner (¶[0102]: "Scanning may be performed from the first to m-th rows, sequentially, or from the m-th to first rows, in reverse order. In the following description, a pixel region of rows on which slow scanning is performed is referred to as a region of uninterest (or no-interest), and a pixel region of rows on which fast scanning is performed is referred to as a region of interest"; ¶[0102] discloses determining a region of interest by performing fast scanning, and ¶[0103], "the first reference line and the second reference line, which are reference rows or reference gate lines GL that divide a region to be subject to fast scanning from a region to be subject to slow scanning, may have been set in advance by a user or a manufacturer", discloses that a user may set the scanning lines dividing the fast scan from the slow scan); controlling, by the detection device, a second part corresponding to an area other than the key area in the plurality of scanning lines in a second scanning manner (Figure 10 shows a slow scan performed over the region of uninterest and a fast scan over the region of interest in the image).

Kim does not explicitly teach generating, by the detection device, second image data, wherein a scanning frequency of the first scanning manner is lower than a scanning frequency of the second scanning manner. Lee teaches generating, by the detection device, second image data, wherein a scanning frequency of the first scanning manner is lower than a scanning frequency of the second scanning manner (¶[0023]: "a display device may be include a display panel including a pixel connected to a first scan line, a second scan line, and a data line, the pixel including a first type switching element connected to the first scan line, a second type switching element connected to the second scan line, and a light emitting element; a low-frequency driving controller configured to output a first power control signal in a first operation mode and a second power control signal in a second operation mode, wherein in the second operation mode, an image is displayed at a frequency lower than a reference frequency"; as disclosed in ¶[0023], generated image data is displayed at a scanning frequency lower than a reference frequency). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim with Lee in order to generate a second image where the first scanning frequency is lower than the second scanning frequency. One skilled in the art would have been motivated to modify Kim in this manner in order for a data driver to operate at a frequency lower than a reference frequency in a second mode. (Lee, Abstract)

Regarding Claim 4, the combination of Kim and Lee teaches the control method according to claim 1, where Kim further teaches wherein the key area is determined according to an external control signal (¶[0084]: "The region of interest may be automatically set by the X-ray detector 120, based on a gate line GL from which X-rays have been detected, or the region of interest may have been set in advance manually by a user or a manufacturer. Herein, the user is a person who diagnoses an object using the X-ray imaging apparatus 100, and may be medical personnel including a doctor, a radiological technologist, and a nurse. However, the user may be anyone who uses the X-ray imaging apparatus 100.
A method of setting reference lines automatically or manually will be described later"; ¶[0084] discloses that a user is able to set a region of interest to detect in the image).

Regarding Claim 5, the combination of Kim and Lee teaches the control method according to claim 1, where Kim further teaches wherein the key area is determined according to data processing of the first image data (¶[0105] discloses that fast scanning is performed on the image data to determine pixels that pertain to the region of interest, also shown in Figure 10).

Regarding Claim 6, the combination of Kim and Lee teaches the control method according to claim 5, where Kim further teaches wherein the data processing comprises an image comparison program (¶[0166]: "applying a spatial domain method of analyzing correlation between a low-resolution image and a high-resolution image in a spatial domain to reconstruct a high-resolution image using a low-resolution image, or a frequency domain method of analyzing correlation between a low-resolution image and a high-resolution image in a frequency domain to reconstruct a high-resolution image using a low-resolution image, pixel information is changed to be suitable for a high-resolution (HR) grid. Then, restoration, such as De-noising or De-blurring, may be applied."; ¶[0166] discloses using an image processor to determine a correlation between a low-resolution image and a high-resolution image in order to reconstruct the high-resolution image).

Regarding Claim 7, the combination of Kim and Lee teaches the control method according to claim 1, where Kim further teaches the detection system further comprising a computing device, coupled to the detection device (¶[0131]: "The image processor 130 (see FIG. 1) may reconstruct a high-resolution X-ray image using a plurality of low-resolution X-ray images acquired by the X-ray detector 120."; ¶[0131] discloses that the image processor receives data from the x-ray detector to generate a high-resolution x-ray image, and Fig. 1 also shows that the x-ray detector is coupled to the image processor), wherein an image compensation program is performed on the second image data by the computing device to generate output image data (¶[0160]: "The image processor 130 may reconstruct a high-resolution X-ray image using a plurality of low-resolution X-ray images acquired by the X-ray detector 120. In order to reconstruct a high-resolution X-ray image, the image processor 130 may use a super resolution image reconstruction method." ¶[0161]: "The super resolution image reconstruction method is also called a high resolution image reconstruction method. However, the image processor 130 may use any other method for reconstructing a high-resolution X-ray image using a plurality of low-resolution X-ray images."; ¶[0160]-¶[0161] disclose a method of performing image reconstruction by using a plurality of low-resolution images to generate a final high-resolution image).

Regarding Claim 12, the combination of Kim and Lee teaches the control method according to claim 1, where Kim further teaches determining that the at least a part in the first image data corresponds to the key area according to an external control signal (¶[0084]: "The region of interest may be automatically set by the X-ray detector 120, based on a gate line GL from which X-rays have been detected, or the region of interest may have been set in advance manually by a user or a manufacturer. Herein, the user is a person who diagnoses an object using the X-ray imaging apparatus 100, and may be medical personnel including a doctor, a radiological technologist, and a nurse. However, the user may be anyone who uses the X-ray imaging apparatus 100. A method of setting reference lines automatically or manually will be described later"; ¶[0084] discloses that a user is able to set a region of interest to detect in the image).

Regarding Claim 16, the combination of Kim and Lee teaches the control method according to claim 1, where Kim further teaches wherein the first scanning manner is to drive a plurality of scanning lines corresponding to the key area in a normal line-by-line driving manner (¶[0079]: "That is, the X-ray detector 120 detects X-rays and acquires actual X-ray image signals by scanning the m gate lines GL."; ¶[0079] discloses that x-rays are acquired by scanning gate lines, and as shown in Figure 11, the m gate lines are scanned line by line).

Regarding Claim 17, the combination of Kim and Lee teaches the control method according to claim 1, where Kim further teaches wherein the second scanning manner is to simultaneously drive a plurality of scanning lines corresponding to an area other than the key area in an image binning manner (¶[0024]: "if an X-ray signal is detected from a gate line of the plurality of gate lines, the gate driver may synchronize the turn-on time of the turn-on signal according to a predetermined binning pattern." ¶[0101]: "the gate driver 122 of the X-ray detector 120 according to an embodiment of the present disclosure may apply a turn-on signal for driving each transistor 121c, to the first to m-th rows, sequentially, in order to detect X-rays from an X-ray detection region. If X-rays are detected from a certain row (for example, the m1-th row), the gate driver 122 may reduce a turn-on time period of the turn-on signal from the lower row (that is, the (m1+1)-th row).
Also, the gate driver 122 of the X-ray detector 120 may apply a turn-on signal for driving each transistor 121c, to the m-th to first rows, in reverse order, in order to detect X-rays from the X-ray detection region."; as disclosed in ¶[0024], the gate driver may synchronize the turn-on time of the turn-on signal by using a binning pattern; ¶[0101] discloses the process of driving the gate driver when scanning; and Figure 11 shows line-by-line scanning to determine the key area or a region of uninterest).

Regarding Claim 18, the combination of Kim and Lee teaches the control method according to claim 1, where Kim further teaches wherein the detection device is a flat panel detector (Fig. 1 shows an x-ray imaging apparatus with an x-ray source and x-ray detector; it is well known that most x-ray imaging apparatuses are flat panel detectors).

Regarding Claim 19, the combination of Kim and Lee teaches the control method according to claim 1, where Kim further teaches wherein the first image data and the second image data are X-ray image data (¶[0098]: "The X-ray imaging apparatus 100 can acquire real-time X-ray moving images about the object"; ¶[0098] discloses that the image data acquired are x-ray moving images).

Claims 2-3, 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US PG-Pub US 20160061966 A1) in view of Lee et al. (US PG-Pub US 20200219450 A1), and further in view of Tang et al. (US PG-Pub US 20210407103 A1).

Regarding Claim 2, while the combination of Kim and Lee teaches the control method according to claim 1, it does not explicitly teach wherein the key area comprises a moving object. Tang teaches wherein the key area comprises a moving object (¶[0063]: "As shown in FIG. 2, the server 103 performs object detection and key point detection on the frames for detection, and performs a tracking operation on the frames for tracking, that is, extracts motion vectors of each of two adjacent image frames from between the two adjacent image frames, and collects statistics of the motion vectors by regions to locate positions of an object in the image frames, and further determines a movement direction of the tracked object in a real scenario based on the located positions, thereby achieving the tracking of the object."; ¶[0063] discloses determining the region in which an object is moving in the image). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Lee with Tang in order to determine a moving object in the key area. One skilled in the art would have been motivated to modify Kim and Lee in this manner in order to perform object tracking to ensure tracking accuracy while improving the tracking speed. (Tang, ¶[0006])

Regarding Claim 3, the combination of Kim, Lee and Tang teaches the control method according to claim 2, where Tang further teaches wherein the key area is determined according to data processing of the first image data, and the data processing comprises determining the key area by a minimum rectangular range surrounding the moving object.
(¶[0114]: "In this embodiment of the present disclosure, because the current image frame is a frame for tracking, in a process of processing the previous image frame, positions of the regions of the target object in the current image frame are estimated based on the motion vectors of the previous image frame, and a bounding rectangle is obtained, based on the estimated positions of the regions, as the position of the target object in the current image frame"; ¶[0114] discloses placing a bounding rectangle around the object of interest in the frame, and ¶[0046] discloses "The size of the bounding box surrounding the object can reflect the size of the object"; the bounding box appears to be the size of the object in the frame, as seen in Figure 6, where a bounding box covers just the moving object.) It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Lee with Tang in order to place a bounding box around the object to track motion. One skilled in the art would have been motivated to modify Kim and Lee in this manner in order to perform object tracking to ensure tracking accuracy while improving the tracking speed. (Tang, ¶[0006])

Regarding Claim 11, while the combination of Kim and Lee teaches the control method according to claim 1, it does not explicitly teach further comprising: executing a trained neural network module to determine that the at least a part in the first image data corresponds to the key area. Tang teaches executing a trained neural network module to determine that the at least a part in the first image data corresponds to the key area (¶[0088] discloses using a CNN to perform the object detection in the image, and ¶[0099]-¶[0100] disclose determining key points that pertain to an object in a region in the image). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Lee with Tang in order to use a neural network to determine key points pertaining to an object region. One skilled in the art would have been motivated to modify Kim and Lee in this manner in order to ensure tracking accuracy while improving the tracking speed. (Tang, ¶[0007])

Regarding Claim 20, while the combination of Kim and Lee teaches the control method according to claim 1, it does not explicitly teach wherein a part corresponding to the key area in the first image data comprises moving object image data. Tang teaches wherein a part corresponding to the key area in the first image data comprises moving object image data (¶[0063]: "As shown in FIG. 2, the server 103 performs object detection and key point detection on the frames for detection, and performs a tracking operation on the frames for tracking, that is, extracts motion vectors of each of two adjacent image frames from between the two adjacent image frames, and collects statistics of the motion vectors by regions to locate positions of an object in the image frames, and further determines a movement direction of the tracked object in a real scenario based on the located positions, thereby achieving the tracking of the object."; ¶[0063] discloses determining the region in which an object is moving in the image). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Lee with Tang in order to determine a moving object in the key area. One skilled in the art would have been motivated to modify Kim and Lee in this manner in order to perform object tracking to ensure tracking accuracy while improving the tracking speed. (Tang, ¶[0006])

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US PG-Pub US 20160061966 A1) in view of Lee et al. (US PG-Pub US 20200219450 A1), and further in view of Fang et al. (US PG-Pub US 20200018944 A1).

Regarding Claim 8, while the combination of Kim and Lee teaches the control method according to claim 7, it does not explicitly teach wherein the image compensation program is to compensate a part corresponding to the area other than the key area in the second image data. Fang teaches wherein the image compensation program is to compensate a part corresponding to the area other than the key area in the second image data (¶[0069]: "In some embodiments, image enhancer 380 may be configured to focus the enhancement on a few features, such as edges and the like. In such embodiments, rather than attempting to enhance the image quality of the entire low-resolution inspection image 330, image enhancer 380 may be configured to only enhance the image quality of certain areas of interest, which may further improve its accuracy and throughput."; as disclosed in ¶[0069], the prior art performs image enhancements on certain areas in the image data rather than the whole image). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Lee with Fang in order to enhance the resolution in certain areas of the image. One skilled in the art would have been motivated to modify Kim and Lee in this manner in order to only enhance the image quality of certain areas of interest, which may further improve its accuracy and throughput. (Fang, ¶[0025])

Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US PG-Pub US 20160061966 A1) in view of Lee et al. (US PG-Pub US 20200219450 A1), and further in view of Kubota et al. (US PG-Pub US 20240430429 A1).

Regarding Claim 9, while the combination of Kim and Lee teaches the control method according to claim 7, it does not explicitly teach wherein a data volume of the second image data is smaller than a data volume of the output image data.
Kubota teaches wherein a data volume of the second image data is smaller than a data volume of the output image data (¶[0206]: "As described above, by setting the maximum quantization value in the non-effective area (or setting the non-effective area as the invalid image), according to the image processing system 100 according to the second embodiment," ¶[0207]: "re-encoded data having data volume smaller than that of first encoded data and second encoded data is stored"; as disclosed in this section of the prior art, the re-encoded data has a smaller data volume than the original output encoded data). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Lee with Kubota in order to have a smaller data volume in the second image data compared to the original output. One skilled in the art would have been motivated to modify Kim and Lee in this manner so that the reduction in the transmission data volume between the hierarchical encoding device and the server device can be maintained. (Kubota, ¶[0084])

Regarding Claim 10, while the combination of Kim and Lee teaches the control method according to claim 1, it does not explicitly teach wherein a data volume of the first image data is larger than a data volume of the second image data. Kubota teaches wherein a data volume of the first image data is larger than a data volume of the second image data (¶[0206] discloses that the data volume of the re-encoded data is smaller than that of the first encoded data, which means the first image data would inherently be larger than the re-encoded/second image data). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Lee with Kubota in order to have the first data volume be larger than the second data volume. One skilled in the art would have been motivated to modify Kim and Lee in this manner so that the reduction in the transmission data volume between the hierarchical encoding device and the server device can be maintained. (Kubota, ¶[0084])

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US PG-Pub US 20160061966 A1) in view of Lee et al. (US PG-Pub US 20200219450 A1), and further in view of Muraoka (US Patent US 11962920 B2).

Regarding Claim 13, the combination of Kim and Lee teaches the control method according to claim 1, where Kim further teaches generating, by the detection device, another first image data (¶[0164]: "As shown in FIG. 21, in order to create a high-resolution image, registration and reconstruction may be performed." ¶[0165]: "The registration is used to obtain geometrical alignment correlation between low-resolution images. For example, if the X-ray detector 120 has acquired four low-resolution X-ray images having different binning patterns shifted by 1 pixel in the horizontal and vertical directions, registration may be applied to obtain alignment correlation as shown in FIG. 21."; ¶[0164]-¶[0165] disclose acquiring four low-resolution x-ray images in order to perform high-resolution image reconstruction and registration). However, Kim and Lee do not explicitly teach comparing data difference between the first image data and the another first image data to determine that the at least a part in the first image data corresponds to the key area. Muraoka teaches comparing data difference between the first image data and the another first image data to determine that the at least a part in the first image data corresponds to the key area (Col. 17, lines 11-22: "The comparison section 183 compares the past image (FIG. 13A) held in the image memory 182 and a current image (FIG. 13B) to take a difference absolute value between the same pixels, and thereby obtains a difference absolute value image (FIG. 13C). Although there is no difference between the past image and the current image in a region having no motion, there is a difference therebetween in a region having motion, and the difference is extracted. The current image information is stored in the image memory 182 for comparison operation in a next imaging frame"; as disclosed in this section of the prior art, a data comparison is performed between two images to determine pixel differences in the case of motion in a region). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Lee with Muraoka in order to compare data differences between two images to determine if the region of interest has changed. One skilled in the art would have been motivated to modify Kim and Lee in this manner in order to read out signals of the pixels in a specific region (ROI, Region of Interest) in the pixel array section at a second frame rate higher than the first frame rate. (Muraoka, Col. 6, lines 24-27)

Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US PG-Pub US 20160061966 A1) in view of Lee et al. (US PG-Pub US 20200219450 A1), and further in view of Kim 2 et al. (US PG-Pub US 20230132809 A1).

Regarding Claim 14, while the combination of Kim and Lee teaches the control method according to claim 1, it does not explicitly teach further comprising: defining a pixel array to correspond to a range of a high resolution area and a low resolution area according to the key area. Kim 2 teaches defining a pixel array to correspond to a range of a high resolution area and a low resolution area according to the key area (¶[0051]: "when the pixel array 13 constituting the image sensing region is divided into first to n-th regions (n is an integer greater than or equal to 3), the pixel array 13 may be provided to obtain an image of different resolutions in the first to n-th regions.
That is, the pixel array 13 may be obtain a high-resolution image in the region of interest, for example, the first region, and a relatively low-resolution image in the non-interested region, for example, the n-th region, etc."; as disclosed in Figure 1 and ¶[0051], the prior art is able to define a pixel array in which high resolution areas pertain to the region of interest and low resolution areas pertain to non-interested regions). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Lee with Kim 2 in order to define the pixel array to have a high resolution range for areas of interest and a low resolution range for non-interested areas. One skilled in the art would have been motivated to modify Kim and Lee in this manner so that the region of interest may be realized with a high resolution and the total amount of pixel information may be reduced, thereby improving the frame rate. (Kim 2, ¶[0047])

Regarding Claim 15, the combination of Kim, Lee and Kim 2 teaches the control method according to claim 14, where Kim 2 further teaches wherein the key area corresponds to the high resolution area of the pixel array, and an area other than the high resolution area may be a low resolution area (¶[0051] discloses that the region of interest pertains to a high resolution region in the pixel area while low resolution areas are non-interested regions). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Lee with Kim 2 in order for the pixel array to have high resolution areas for areas of interest and low resolution areas for non-interested areas. One skilled in the art would have been motivated to modify Kim and Lee in this manner so that the region of interest may be realized with a high resolution and the total amount of pixel information may be reduced, thereby improving the frame rate. (Kim 2, ¶[0047])

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAN D HOANG, whose telephone number is (571) 272-4344. The examiner can normally be reached Monday-Friday, 8-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JOHN M VILLECCO, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAN HOANG/
Examiner, Art Unit 2661
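The mechanism the rejection walks through (frame differencing to find the key area, a minimum bounding rectangle around the motion, fast line-by-line scanning inside that region, and binned low-resolution scanning outside it) can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the technique described, not code from any cited reference; all names and thresholds are hypothetical.

```python
import numpy as np

def bin2x2(img):
    """2x2 pixel binning: average non-overlapping 2x2 blocks (low-res readout)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def key_area(prev, curr, thresh=10):
    """Frame differencing: minimal bounding rectangle of pixels that changed."""
    diff = np.abs(curr.astype(float) - prev.astype(float)) > thresh
    if not diff.any():
        return None
    rows = np.where(diff.any(axis=1))[0]
    cols = np.where(diff.any(axis=0))[0]
    return rows[0], rows[-1], cols[0], cols[-1]  # (top, bottom, left, right)

def scan_plan(n_lines, roi):
    """Mark each scan line 'fast' (inside ROI, line-by-line) or 'slow' (binned)."""
    top, bottom = roi
    return ["fast" if top <= r <= bottom else "slow" for r in range(n_lines)]
```

For example, a frame whose pixels changed only in rows 3-5 and columns 2-4 yields the rectangle (3, 5, 2, 4), and `scan_plan(8, (3, 5))` drives only rows 3-5 in the fast, line-by-line manner.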

Prosecution Timeline

Dec 12, 2023: Application Filed
Nov 29, 2025: Non-Final Rejection (§103)
Feb 26, 2026: Response Filed
Apr 10, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602835
POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD
2y 5m to grant · Granted Apr 14, 2026
Patent 12602778
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
2y 5m to grant · Granted Apr 14, 2026
Patent 12602918
LEARNING DATA GENERATING APPARATUS, LEARNING DATA GENERATING METHOD, AND NON-TRANSITORY RECORDING MEDIUM HAVING LEARNING DATA GENERATING PROGRAM RECORDED THEREON
2y 5m to grant · Granted Apr 14, 2026
Patent 12592070
IMAGE PROCESSING APPARATUS
2y 5m to grant · Granted Mar 31, 2026
Patent 12586364
SINGLE IMAGE CONCEPT ENCODER FOR PERSONALIZATION USING A PRETRAINED DIFFUSION MODEL
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview (+19.3%): 93%
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 162 resolved cases by this examiner. Grant probability derived from career allow rate.
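The headline projections are simple functions of the examiner's career record. A quick reconstruction, assuming the base probability is the career allow rate and the with-interview figure simply adds the reported lift (an assumption about the model, which the report does not state):

```python
granted, resolved = 120, 162   # examiner's career record, from the report
interview_lift = 0.193         # reported lift among resolved cases with interview

base = granted / resolved                       # career allow rate, about 0.741
with_interview = min(base + interview_lift, 1.0)

print(f"{base:.0%}")            # matches the 74% grant probability shown
print(f"{with_interview:.0%}")  # matches the 93% with-interview figure
```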
