Prosecution Insights
Last updated: April 19, 2026
Application No. 19/241,674

METHOD, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM FOR PROCESSING INFORMATION

Status: Non-Final OA (§102)
Filed: Jun 18, 2025
Examiner: KHOO, STACY
Art Unit: 2624
Tech Center: 2600 (Communications)
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 81% (486 granted / 598 resolved), above average at +19.3% vs Tech Center average
Interview Lift: +14.8% (moderate), among resolved cases with an interview
Average Prosecution: 2y 6m
Currently Pending: 20
Total Applications: 618 (across all art units)
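The headline figures above are simple ratios of the raw counts. As a sanity check, a minimal Python sketch (variable names are illustrative, not from any real API) reproduces them:

```python
# Sanity-check the dashboard's examiner statistics from the raw counts.
granted = 486    # applications this examiner allowed
resolved = 598   # all resolved applications (allowed + abandoned)
total = 618      # career total, including those still pending

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")   # ~81.3%, shown as 81%
print(f"Currently pending: {total - resolved}")  # 618 - 598 = 20
```

The 81% career allow rate is what the dashboard reuses directly as the grant probability for this application.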

Statute-Specific Performance

§101: 1.6% (-38.4% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 23.9% (-16.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 598 resolved cases.
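Each statute row pairs a rate with a delta against the Tech Center average, so the implied baseline can be backed out. A short Python sketch shows that every row implies the same 40.0% baseline, suggesting the dashboard compares all four statutes against a single TC-wide estimate rather than statute-specific averages:

```python
# Back out the implied Tech Center baseline from each statute row
# (rate, delta vs TC avg), as listed above.
rows = {
    "§101": (1.6, -38.4),
    "§103": (49.7, +9.7),
    "§102": (19.9, -20.1),
    "§112": (23.9, -16.1),
}

for statute, (rate, delta) in rows.items():
    implied_tc_avg = rate - delta  # rate = TC avg + delta
    print(f"{statute}: implied TC average = {implied_tc_avg:.1f}%")
# Every row prints 40.0%.
```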

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Castillo et al. (US 2021/0243362 A1).

As to claim 1, Castillo et al. teaches a method, comprising: receiving, through a display interface, a triggering operation associated with using a target object ([0064]: framing the physical structure within a display; [0073]; [0078]: the user operates user device 110 to capture images for a set of images that will be transmitted to server 120 for 3D reconstruction, the captured images are individually uploaded to server 120; [0084]; [0213]; [0215]: capture image); determining whether a predetermined condition associated with the target object is satisfied ([0084]; [0203]: determine whether each image captured during the image capture session satisfies a 3D reconstruction condition; [0213]; [0216]: At block 5450, the native application executing on user device 110 can determine whether the first 2D image and the second 2D image satisfy a 3D reconstruction condition); in response to determining that the predetermined condition associated with the target object is satisfied, responding to the triggering operation to allow the target object to be used to process a target subject ([0078]: capture images for 3D reconstruction; [0203]: As illustrated in FIG. 51, user guidance system 270 determines that the image captured from position B satisfies the 3D reconstruction condition with respect to the image captured from position A, and accordingly, generates the feedback notification of “Image captured. Please continue”; [0217]: If the first 2D image and the second 2D image do satisfy the 3D reconstruction condition (e.g., “Yes” branch out of block 5450), then process 5400 proceeds to block 5460. At block 5460, the native application causes the image capture session to capture and store the second 2D image); and in response to determining that the predetermined condition is not satisfied, displaying, on the display interface ([0084]: If the 3D reconstruction condition is not satisfied, then user guidance system 270 can generate a feedback notification; [0217]: block 5470, the native application executing on user device 110 displays a notification indicating that the first pose and the second pose are too far apart for 3D reconstruction), information associated with the predetermined condition and guidance for guiding the user to perform a predetermined operation in order to make the predetermined condition satisfied ([0067]; [0084]: user guidance system determines a new location to which the user should walk to re-capture an image that does satisfy the 3D reconstruction condition; [0202-0203]: image captured from position D does not satisfy the 3D reconstruction condition, accordingly, user guidance system 270 generates the feedback notification of “Image not captured. You walked too far. Please walk back 5 steps to capture the image”; [0213]: generate guidance to user; [0217]: block 5470, the native application executing on user device 110 displays a notification indicating that the first pose and the second pose are too far apart for 3D reconstruction. The native application detects a new location and guides the user to walk towards the new location to recapture the second 2D image).

As to claim 2, Castillo et al. teaches the method of claim 1, wherein the predetermined condition is whether a current available trial number for using the target object is greater than a predetermined threshold number ([0216]: the 3D reconstruction condition is a condition that the number of feature matches be above a threshold value).

As to claim 3, Castillo et al. teaches the method of claim 2, further comprising: configuring the current available trial number in response to determining that the target object is triggered for a first time ([0205]: feature matches presented on the visualization of house; [0215-0216]: capture first 2D image, detect feature matches between the first 2D image and the second 2D image using feature detection and feature matching techniques); and fetching the current available trial number in response to determining that the target object is not triggered for the first time ([0205]; [0216]: detect feature matches between the first 2D image and the second 2D image using feature detection and feature matching techniques. The 3D reconstruction condition is a condition that the number of feature matches be at or above a threshold value), wherein the current available trial number is determined based on a historical operation associated with the target object ([0205]; [0215-0216]: detect feature matches between the first 2D image and the second 2D image using feature detection and feature matching techniques).

As to claim 4, Castillo et al. teaches the method of claim 2, wherein the triggering operation comprises an operation of uploading a to-be-processed image comprising the target subject, or an operation of capturing a to-be-processed image comprising the target subject ([0073]: in response to receiving the set of images capturing various angles of house 150 from user device 110, the native or web application displays a final image 170, which is a visualization of a reconstructed 3D model of house 150. Final image 170 presented on a display of user device 110; [0078]), and the method further comprises updating the current available trial number according to a predetermined adjustment number after responding to the triggering operation to allow the target object to be used to process the target subject, or after determining that the predetermined operation has been performed ([0203]; [0205]; [0214]; [0216]: the 3D reconstruction condition is a condition that the number of feature matches be at or above a threshold value).

As to claim 5, Castillo et al. teaches the method of claim 1, wherein the target object comprises at least one effect item ([0073]: 2D image), and the method further comprises: performing effect processing on the target subject based on the at least one effect item to obtain an effect image corresponding to a to-be-processed image ([0073]: in response to receiving the set of 2D images capturing various angles of house, the native or web application displays a final image 170, which is a visualization of a reconstructed 3D model of house 150); and displaying the effect image on the display interface ([0073]: final image 170 presented on display).

As to claim 6, Castillo et al. teaches the method of claim 2, further comprising: determining a predetermined updating value in response to detecting a second operation on the display interface ([0203]: determine whether each image captured during the image capture session satisfies a 3D reconstruction condition; [0205]; [0216]: the 3D reconstruction condition is a condition that the number of feature matches be at or above a threshold value); and updating the current available trial number based on the predetermined updating value ([0203]; [0205]; [0214]; [0216-0217]: the 3D reconstruction condition is a condition that the number of feature matches be at or above a threshold value. Detect feature matches using feature detection and feature matching techniques).

As to claim 7, Castillo et al. teaches the method of claim 6, wherein the second operation is an operation corresponding to publishing or submitting a processing result of processing the target subject using the target object ([0078]; [0203]; [0205]: When the entirety of house 150 is covered in detected feature matches, then the image capture session has captured a sufficient amount of image data to allow 3D model reconstruction system 280 to generate a 3D model of house 150).

As to claim 8, Castillo et al. teaches the method of claim 2, further comprising: in response to the current available trial number reaching the predetermined threshold number, presenting a second interface on the display interface ([0203]; [0205]: interface 5200 displays matched features, such as matched feature 5230. When the entirety of house 150 is covered in detected feature matches, then the image capture session has captured a sufficient amount of image data to allow 3D model reconstruction system 280 to generate a 3D model of house 150; [0216]), and updating the current available trial number in response to a trigger operation on a target control displayed on the second interface ([0205]: By viewing interface 5200, the user quickly understands that area 5220 is an uncovered area of house 150, and that the user needs to capture more images of area 5220 to maximize the feature correspondences associated with house 150. When the entirety of house 150 is covered in detected feature matches, then the image capture session has captured a sufficient amount of image data to allow 3D model reconstruction system 280 to generate a 3D model of house 150), wherein the target control is a value control for obtaining the current available trial number ([0205]: By viewing interface 5200, the user quickly understands that area 5220 is an uncovered area of house 150, and that the user needs to capture more images of area 5220 to maximize the feature correspondences associated with house 150. When the entirety of house 150 is covered in detected feature matches, then the image capture session has captured a sufficient amount of image data to allow 3D model reconstruction system 280 to generate a 3D model of house 150).

As to claim 9, Castillo et al. teaches a device, comprising: one or more processors ([0012]: processing apparatus); and a storage device for storing one or more programs that, when executed by the one or more processors, cause the one or more processors ([0012]: non-transitory machine-readable storage medium, including instructions configured to cause a processing apparatus to perform operations) to: receive, through a display interface, a triggering operation associated with using a target object ([0064]: framing the physical structure within a display; [0073]; [0078]: the user operates user device 110 to capture images for a set of images that will be transmitted to server 120 for 3D reconstruction, the captured images are individually uploaded to server 120; [0084]; [0213]; [0215]: capture image); determine whether a predetermined condition associated with the target object is satisfied ([0084]; [0203]: determine whether each image captured during the image capture session satisfies a 3D reconstruction condition; [0213]; [0216]: At block 5450, the native application executing on user device 110 can determine whether the first 2D image and the second 2D image satisfy a 3D reconstruction condition); in response to determining that the predetermined condition associated with the target object is satisfied, respond to the triggering operation to allow the target object to be used to process a target subject ([0078]: capture images for 3D reconstruction; [0203]: As illustrated in FIG. 51, user guidance system 270 determines that the image captured from position B satisfies the 3D reconstruction condition with respect to the image captured from position A, and accordingly, generates the feedback notification of “Image captured. Please continue”; [0217]: If the first 2D image and the second 2D image do satisfy the 3D reconstruction condition (e.g., “Yes” branch out of block 5450), then process 5400 proceeds to block 5460. At block 5460, the native application causes the image capture session to capture and store the second 2D image); and in response to determining that the predetermined condition is not satisfied, display, on the display interface ([0084]: If the 3D reconstruction condition is not satisfied, then user guidance system 270 can generate a feedback notification; [0217]: block 5470, the native application executing on user device 110 displays a notification indicating that the first pose and the second pose are too far apart for 3D reconstruction), information associated with the predetermined condition and guidance for guiding the user to perform a predetermined operation in order to make the predetermined condition satisfied ([0067]; [0084]: user guidance system determines a new location to which the user should walk to re-capture an image that does satisfy the 3D reconstruction condition; [0202-0203]: image captured from position D does not satisfy the 3D reconstruction condition, accordingly, user guidance system 270 generates the feedback notification of “Image not captured. You walked too far. Please walk back 5 steps to capture the image”; [0213]: generate guidance to user; [0217]: block 5470, the native application executing on user device 110 displays a notification indicating that the first pose and the second pose are too far apart for 3D reconstruction. The native application detects a new location and guides the user to walk towards the new location to recapture the second 2D image).

As to claim 10, Castillo et al. teaches the device of claim 9, wherein the predetermined condition is whether a current available trial number for using the target object is greater than a predetermined threshold number ([0216]: the 3D reconstruction condition is a condition that the number of feature matches be above a threshold value).

As to claim 11, Castillo et al. teaches the device of claim 10, wherein the one or more programs further cause the one or more processors ([0012]: non-transitory machine-readable storage medium, including instructions configured to cause a processing apparatus to perform operations) to: configure the current available trial number in response to determining that the target object is triggered for a first time ([0205]: feature matches presented on the visualization of house; [0215-0216]: capture first 2D image, detect feature matches between the first 2D image and the second 2D image using feature detection and feature matching techniques); and fetch the current available trial number in response to determining that the target object is not triggered for the first time ([0205]; [0216]: detect feature matches between the first 2D image and the second 2D image using feature detection and feature matching techniques. The 3D reconstruction condition is a condition that the number of feature matches be at or above a threshold value), wherein the current available trial number is determined based on a historical operation associated with the target object ([0205]; [0215-0216]: detect feature matches between the first 2D image and the second 2D image using feature detection and feature matching techniques).

As to claim 12, Castillo et al. teaches the device of claim 10, wherein the triggering operation comprises an operation of uploading a to-be-processed image comprising the target subject, or an operation of capturing a to-be-processed image comprising the target subject ([0073]: in response to receiving the set of images capturing various angles of house 150 from user device 110, the native or web application displays a final image 170, which is a visualization of a reconstructed 3D model of house 150. Final image 170 presented on a display of user device 110; [0078]), and the one or more programs further cause the one or more processors ([0012]: non-transitory machine-readable storage medium, including instructions configured to cause a processing apparatus to perform operations) to update the current available trial number according to a predetermined adjustment number after responding to the triggering operation to allow the target object to be used to process the target subject, or after determining that the predetermined operation has been performed ([0203]; [0205]; [0214]; [0216]: the 3D reconstruction condition is a condition that the number of feature matches be at or above a threshold value).

As to claim 13, Castillo et al. teaches the device of claim 9, wherein the target object comprises at least one effect item ([0073]: 2D image), and the one or more programs further cause the one or more processors ([0012]: non-transitory machine-readable storage medium, including instructions configured to cause a processing apparatus to perform operations) to: perform effect processing on the target subject based on the at least one effect item to obtain an effect image corresponding to a to-be-processed image ([0073]: in response to receiving the set of 2D images capturing various angles of house, the native or web application displays a final image 170, which is a visualization of a reconstructed 3D model of house 150); and display the effect image on the display interface ([0073]: final image 170 presented on display).

As to claim 14, Castillo et al. teaches the device of claim 10, wherein the one or more programs further cause the one or more processors ([0012]: non-transitory machine-readable storage medium, including instructions configured to cause a processing apparatus to perform operations) to: determine a predetermined updating value in response to detecting a second operation on the display interface ([0203]: determine whether each image captured during the image capture session satisfies a 3D reconstruction condition; [0205]; [0216]: the 3D reconstruction condition is a condition that the number of feature matches be at or above a threshold value); and update the current available trial number based on the predetermined updating value ([0203]; [0205]; [0214]; [0216-0217]: the 3D reconstruction condition is a condition that the number of feature matches be at or above a threshold value. Detect feature matches using feature detection and feature matching techniques).

As to claim 15, Castillo et al. teaches the device of claim 14, wherein the second operation is an operation corresponding to publishing or submitting a processing result of processing the target subject using the target object ([0078]; [0203]; [0205]: When the entirety of house 150 is covered in detected feature matches, then the image capture session has captured a sufficient amount of image data to allow 3D model reconstruction system 280 to generate a 3D model of house 150).

As to claim 16, Castillo et al. teaches the device of claim 10, wherein the one or more programs further cause the one or more processors ([0012]: non-transitory machine-readable storage medium, including instructions configured to cause a processing apparatus to perform operations) to: in response to the current available trial number reaching the predetermined threshold number, present a second interface on the display interface ([0203]; [0205]: interface 5200 displays matched features, such as matched feature 5230. When the entirety of house 150 is covered in detected feature matches, then the image capture session has captured a sufficient amount of image data to allow 3D model reconstruction system 280 to generate a 3D model of house 150; [0216]), and update the current available trial number in response to a trigger operation on a target control displayed on the second interface ([0205]: By viewing interface 5200, the user quickly understands that area 5220 is an uncovered area of house 150, and that the user needs to capture more images of area 5220 to maximize the feature correspondences associated with house 150. When the entirety of house 150 is covered in detected feature matches, then the image capture session has captured a sufficient amount of image data to allow 3D model reconstruction system 280 to generate a 3D model of house 150), wherein the target control is a value control for obtaining the current available trial number ([0205]: By viewing interface 5200, the user quickly understands that area 5220 is an uncovered area of house 150, and that the user needs to capture more images of area 5220 to maximize the feature correspondences associated with house 150. When the entirety of house 150 is covered in detected feature matches, then the image capture session has captured a sufficient amount of image data to allow 3D model reconstruction system 280 to generate a 3D model of house 150).

As to claim 17, Castillo et al. teaches a non-transitory storage medium comprising computer-executable instructions that, when executed by a processor, cause the processor to perform ([0012]: non-transitory machine-readable storage medium, including instructions configured to cause a processing apparatus to perform operations) a method comprising: receiving, through a display interface, a triggering operation associated with using a target object ([0064]: framing the physical structure within a display; [0073]; [0078]: the user operates user device 110 to capture images for a set of images that will be transmitted to server 120 for 3D reconstruction, the captured images are individually uploaded to server 120; [0084]; [0213]; [0215]: capture image); determining whether a predetermined condition associated with the target object is satisfied ([0084]; [0203]: determine whether each image captured during the image capture session satisfies a 3D reconstruction condition; [0213]; [0216]: At block 5450, the native application executing on user device 110 can determine whether the first 2D image and the second 2D image satisfy a 3D reconstruction condition); in response to determining that the predetermined condition associated with the target object is satisfied, responding to the triggering operation to allow the target object to be used to process a target subject ([0078]: capture images for 3D reconstruction; [0203]: As illustrated in FIG. 51, user guidance system 270 determines that the image captured from position B satisfies the 3D reconstruction condition with respect to the image captured from position A, and accordingly, generates the feedback notification of “Image captured. Please continue”; [0217]: If the first 2D image and the second 2D image do satisfy the 3D reconstruction condition (e.g., “Yes” branch out of block 5450), then process 5400 proceeds to block 5460. At block 5460, the native application causes the image capture session to capture and store the second 2D image); and in response to determining that the predetermined condition is not satisfied, displaying, on the display interface ([0084]: If the 3D reconstruction condition is not satisfied, then user guidance system 270 can generate a feedback notification; [0217]: block 5470, the native application executing on user device 110 displays a notification indicating that the first pose and the second pose are too far apart for 3D reconstruction), information associated with the predetermined condition and guidance for guiding the user to perform a predetermined operation in order to make the predetermined condition satisfied ([0067]; [0084]: user guidance system determines a new location to which the user should walk to re-capture an image that does satisfy the 3D reconstruction condition; [0202-0203]: image captured from position D does not satisfy the 3D reconstruction condition, accordingly, user guidance system 270 generates the feedback notification of “Image not captured. You walked too far. Please walk back 5 steps to capture the image”; [0213]: generate guidance to user; [0217]: block 5470, the native application executing on user device 110 displays a notification indicating that the first pose and the second pose are too far apart for 3D reconstruction. The native application detects a new location and guides the user to walk towards the new location to recapture the second 2D image).

As to claim 18, Castillo et al. teaches the non-transitory storage medium of claim 17, wherein the predetermined condition is whether a current available trial number for using the target object is greater than a predetermined threshold number ([0216]: the 3D reconstruction condition is a condition that the number of feature matches be above a threshold value).

As to claim 19, Castillo et al. teaches the non-transitory storage medium of claim 18, wherein the method further comprises: configuring the current available trial number in response to determining that the target object is triggered for a first time ([0205]: feature matches presented on the visualization of house; [0215-0216]: capture first 2D image, detect feature matches between the first 2D image and the second 2D image using feature detection and feature matching techniques); and fetching the current available trial number in response to determining that the target object is not triggered for the first time ([0205]; [0216]: detect feature matches between the first 2D image and the second 2D image using feature detection and feature matching techniques. The 3D reconstruction condition is a condition that the number of feature matches be at or above a threshold value), wherein the current available trial number is determined based on a historical operation associated with the target object ([0205]; [0215-0216]: detect feature matches between the first 2D image and the second 2D image using feature detection and feature matching techniques).

As to claim 20, Castillo et al. teaches the non-transitory storage medium of claim 18, wherein the triggering operation comprises an operation of uploading a to-be-processed image comprising the target subject, or an operation of capturing a to-be-processed image comprising the target subject ([0073]: in response to receiving the set of images capturing various angles of house 150 from user device 110, the native or web application displays a final image 170, which is a visualization of a reconstructed 3D model of house 150. Final image 170 presented on a display of user device 110; [0078]), and the method further comprises updating the current available trial number according to a predetermined adjustment number after responding to the triggering operation to allow the target object to be used to process the target subject, or after determining that the predetermined operation has been performed ([0203]; [0205]; [0214]; [0216]: the 3D reconstruction condition is a condition that the number of feature matches be at or above a threshold value).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STACY KHOO, whose telephone number is (571) 270-3698. The examiner can normally be reached Mon-Fri, 8:00 am-5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STACY KHOO/
Primary Examiner, Art Unit 2624

Prosecution Timeline

Jun 18, 2025
Application Filed
Feb 21, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12602139: DISPLAY PANEL AND DISPLAY DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592194: DISPLAY DEVICE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586532: DISPLAY APPARATUS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585307: Retractable Displays for Electronic Meetings (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586526: DISPLAY DEVICE AND MOBILE ELECTRONIC DEVICE INCLUDING THE SAME (granted Mar 24, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the five most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 96% (+14.8%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 598 resolved cases by this examiner; grant probability is derived from the career allow rate.
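The "with interview" figure follows from the base grant probability plus the examiner's interview lift. A minimal sketch, assuming the dashboard simply adds the two and caps at 100% (the additive model is an assumption, but it reproduces the stated 96%):

```python
# Combine base grant probability with the interview lift
# (assumed additive, capped at 100%).
base = 81.0           # career allow rate, used as grant probability (%)
interview_lift = 14.8 # examiner's observed interview lift (%)

with_interview = min(base + interview_lift, 100.0)
print(f"With interview: {with_interview:.0f}%")  # 95.8 rounds to 96%
```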
