Prosecution Insights
Last updated: April 19, 2026
Application No. 18/384,318

PROCESSING METHOD AND APPARATUS THEREOF

Status: Non-Final OA (§102)
Filed: Oct 26, 2023
Examiner: TILLERY, RASHAWN N
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Lenovo (Beijing) Limited
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 10m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 64% (394 granted / 611 resolved; +9.5% vs TC avg)
Interview Lift: +11.6% (moderate; allow rate for resolved cases with an interview vs. without)
Avg Prosecution: 3y 10m (typical timeline; 32 applications currently pending)
Total Applications: 643 across all art units (career history)
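
For readers who want to sanity-check these cards, here is a minimal Python sketch of how a career allow rate and an interview lift can be computed from per-application outcome records. The Outcome record and its field names are assumptions for illustration, not the vendor's actual schema.

```python
# Illustrative sketch, not the vendor's actual model: reproducing the
# Examiner Intelligence stats from raw per-application outcome records.
# The Outcome fields below are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    granted: bool        # True if the application issued as a patent
    had_interview: bool  # True if an examiner interview was held

def allow_rate(cases: list[Outcome]) -> float:
    """Share of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[Outcome]) -> float:
    """Allow rate with an interview minus allow rate without one."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# The card above reports 394 granted of 611 resolved:
# 394 / 611 ≈ 0.645, displayed as a 64% career allow rate.
```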

Statute-Specific Performance

Statute   Rate    vs TC avg
§101      5.1%    -34.9%
§103      61.3%   +21.3%
§102      22.8%   -17.2%
§112      5.4%    -34.6%

Tech Center averages are estimates. Based on career data from 611 resolved cases.
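
The statute figures above can be reproduced with simple share-and-delta arithmetic. The sketch below assumes each percentage is that statute's share of this examiner's rejections and that "vs TC avg" is a percentage-point difference from the Tech Center's share; neither assumption is confirmed by the card.

```python
# Hedged sketch: statute mix and delta vs. Tech Center average.
# Semantics of the card's percentages are assumed, not confirmed.
from collections import Counter

def statute_mix(rejections: list[str]) -> dict[str, float]:
    """Map each statute (e.g. '§103') to its share of all rejections."""
    counts = Counter(rejections)
    total = sum(counts.values())
    return {statute: n / total for statute, n in counts.items()}

def vs_tc_avg(examiner: dict[str, float], tc: dict[str, float]) -> dict[str, float]:
    """Percentage-point delta between examiner and Tech Center shares."""
    return {s: examiner[s] - tc.get(s, 0.0) for s in examiner}

# Example with the card's figures: a §103 share of 61.3% against an
# assumed TC share of 40.0% yields the displayed +21.3-point delta.
```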

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. This communication is responsive to the application filed 10/26/2023.

2. Claims 1-20 are pending in this application. Claims 1, 10 and 19 are independent claims. This action is made Non-Final.

Claim Rejections - 35 USC § 102

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ozog (US 10,027,727).

Regarding claim 1, Ozog discloses a processing method, comprising: in response to obtaining a first operation (see fig 31A; e.g., user selection of identified face in interface 7108) performed on a first display region of an electronic device (see fig 31A; e.g., interface 7108), generating a target processing logic (see col. 40, line 23 to col. 42, line 14; e.g., "one or more options 7110 relating to the face may be displayed") based on the first operation and an initial processing logic (see fig 31A; also see col. 40, line 23 to col. 42, line 14; e.g., "after one or more face boundaries have been identified, a face may be selected and one or more options 7110 relating to the face may be displayed"), wherein the first display region displays at least one stream of multimedia data obtained by the electronic device (see fig 31A; e.g., face stream interface displaying photo containing a plurality of faces), the first operation is configured to determine target multimedia data (e.g., face identified by boundaries) from the at least one stream of the multimedia data (see col. 40, line 23 to col. 42, line 14; e.g., "after one or more face boundaries have been identified, a face may be selected and one or more options 7110 relating to the face may be displayed"), and the initial processing logic is a processing logic executed by the electronic device to process current multimedia data before obtaining the first operation (see fig 31A; also see col. 40, line 23 to col. 42, line 14; e.g., "one or more faces may be detected based on facial recognition"); processing the target multimedia data based on the target processing logic (see fig 31A; also see col. 40, line 23 to col. 42, line 14; e.g., "the one or more options may include an ability to tag, ability to share media with the selected face, ability to request media with the selected face, and/or select any other option relating to a selected face"); and outputting target multimedia content obtained by processing the target multimedia data (see fig 31A; e.g., share media with selected face).

Regarding claim 2, Ozog discloses in response to obtaining a second operation performed on a second display region of the electronic device, updating at least one processing parameter in the target processing logic based on the second operation, wherein the second display region is an interface region for configuring an output parameter of target multimedia content for output (see fig 31B, interface 7110; e.g., user selects one of the options displayed in interface 7110).

Regarding claim 3, Ozog discloses wherein updating the at least one processing parameter in the target processing logic based on the second operation includes one of: determining a configuration option of the second operation, and updating a corresponding processing parameter in the target processing logic based on a configuration parameter corresponding to the configuration option (see figs 23 and 31D; e.g., selection of "share media" option allows user to adjust/update share preferences); and determining operation information of the second operation, generating a corresponding configuration parameter based on the operation information, and updating a corresponding processing parameter in the target processing logic based on the corresponding configuration parameter (see fig 31D; e.g., selection of "share media" option allows user to adjust/update share preferences).

Regarding claim 4, Ozog discloses in response to obtaining multiple streams of multimedia data from a target input component, displaying at least one stream of the multimedia data in the first display region of the electronic device, wherein the target input component is a component for collecting or forwarding the at least one stream of the multimedia data (see col. 37, line 65 to col. 38, line 15; e.g., select at least one stream from a number of streams); and identifying objects in the at least one stream of the multimedia data and adding label information to the objects in the first display region, wherein different types of objects are obtained based on different identification models, and the label information is configured to label coordinate information and/or attribute information of the objects (see col. 42, line 38 to col. 43, line 4; e.g., tag faces identified by facial recognition).

Regarding claim 5, Ozog discloses determining the target multimedia data from the at least one stream of the multimedia data displayed in the first display region based on the first operation, which correspondingly includes one of: determining a target object from the at least one stream of the multimedia data based on the first operation, and determining multimedia data containing the target object as the target multimedia data (see col. 40, line 23 to col. 42, line 14; e.g., "after one or more face boundaries have been identified, a face may be selected and one or more options 7110 relating to the face may be displayed"); determining the target object from the at least one stream of the multimedia data based on the first operation, and cropping corresponding multimedia data with corresponding first cropping parameter based on label information of the target object to obtain the target multimedia data; determining the target object from the at least one stream of the multimedia data based on the first operation, and cropping corresponding multimedia data with corresponding second cropping parameter based on label information and output layout information of the target object to obtain the target multimedia data; and based on that the first operation is a circle-selection operation of circle-selecting the target object from the at least one stream of the multimedia data, determining multimedia data circle-selected by the circle-selection operation as the target multimedia data.

Regarding claim 6, Ozog discloses wherein determining the target object from the at least one stream of the multimedia data based on the first operation includes one of: in response to the first operation is a drag operation of dragging a first object labeled in the first display region from a first position to a second position, determining the first object to be the target object, wherein the second position is outside the first display region; in response to the first operation is a circle-selection operation performed on the first display region, determining an object circle-selected by the circle-selection operation as the target object; in response to the first operation is an instruction input operation performed on the electronic device or the first display region of the electronic device, determining an object or region content matching an instruction inputted by the instruction input operation as the target object (see col. 40, line 23 to col. 42, line 14; e.g., "after one or more face boundaries have been identified, a face may be selected and one or more options 7110 relating to the face may be displayed"); and in response to the first operation is a gesture input operation performed on the electronic device or the first display region of the electronic device, determining an object or region content matching a gesture inputted by the gesture input operation as the target object.

Regarding claim 7, Ozog discloses wherein generating the target processing logic based on the first operation and the initial processing logic includes one of: determining display position information of the target multimedia data in the second display region of the electronic device based on the first operation, and processing the initial processing logic based on the display position information and a display parameter of the target multimedia data to obtain the target processing logic; and determining the display position information of the target multimedia data in the second display region of the electronic device based on the first operation, and processing the initial processing logic based on the display position information and a display output parameter of the target multimedia data to obtain the target processing logic (see col. 30, line 48 to col. 31, line 65; e.g., "each face feature may have a corresponding set of data points/criteria that may be utilized to define that feature. Further, in one embodiment, the features and the positions relative to the other features may be utilized in constructing metadata that defines the face of a particular user (e.g. User A in this example, etc.)"), wherein the display output parameter is obtained based on display configuration information of an output device of target multimedia data for output (see fig 23; e.g., identified/selected faces are output based on preferences).

Regarding claim 8, Ozog discloses further including one of: switching multimedia data currently displayed in the first display region based on the first operation to determine multiple streams of target multimedia data from multiple streams of multimedia data; determining a target output mode based on a type of the target multimedia content, and outputting the target multimedia content based on the target output mode (see figs 23 and 31D; e.g., share preferences); and determining an output mode of the target multimedia content based on a third operation on the electronic device and outputting the target multimedia content separately or synchronously according to a corresponding type based on determined output mode.

Regarding claim 9, Ozog discloses wherein outputting the target multimedia content obtained by processing the target multimedia data includes one of: displaying and outputting the target multimedia content to a same display screen or different display screens based on a type of the target multimedia content; outputting the target multimedia content to a target output part of the electronic device, wherein the target output part is determined based on attribute information of the target multimedia content; outputting the target multimedia content to a target application; and outputting the target multimedia content to an output device having a target connection with the electronic device (see col. 8, lines 1-39; e.g., share content via Bluetooth between two users).

Claims 10-18 are similar in scope to claims 1-9, respectively, and are therefore rejected under similar rationale. Claims 19-20 are similar in scope to claims 1-2, respectively, and are therefore rejected under similar rationale.

Conclusion

5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Taine et al. (US 10,182,204).

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RASHAWN N TILLERY, whose telephone number is (571) 272-6480. The examiner can normally be reached M-F 9:00a - 5:30p.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William L Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RASHAWN N TILLERY/
Primary Examiner, Art Unit 2174

Prosecution Timeline

Oct 26, 2023: Application Filed
Jan 22, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications involving similar technology granted by this examiner

Patent 12602701: INTERACTIVE MAP INTERFACE INCORPORATING CUSTOMIZABLE GEOSPATIAL DATA
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12547302: PAGE PRESENTATION METHOD, DISPLAY SYSTEM AND STORAGE MEDIUM
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12542871: DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND READABLE STORAGE MEDIUM
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12536219: DIGITAL CONTAINER FILE FOR MULTIMEDIA PRESENTATION
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12524138: METHOD AND APPARATUS FOR ADJUSTING POSITION OF VIRTUAL BUTTON, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed in these cases to get past this examiner; the list reflects the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 76% (+11.6%)
Median Time to Grant: 3y 10m
PTA Risk: Low

Based on 611 resolved cases by this examiner. Grant probability is derived from the career allow rate.
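
Since the card states that grant probability is derived from the career allow rate, the "With Interview" figure is consistent with simply adding the interview lift to the base rate. A minimal sketch under that assumption follows; the clamping and rounding are illustrative choices, not the vendor's confirmed method.

```python
# Hedged sketch of the "With Interview" projection: base allow rate
# plus interview lift, clamped to [0, 1]. An assumption, not the
# vendor's documented formula.
def projected_grant_probability(base_rate: float, interview_lift: float = 0.0) -> float:
    """Base allow rate plus optional interview lift, clamped to [0, 1]."""
    return min(max(base_rate + interview_lift, 0.0), 1.0)

base = 394 / 611                                    # ≈ 0.645, shown as 64%
with_interview = projected_grant_probability(base, 0.116)
print(f"{base:.0%} base, {with_interview:.0%} with interview")
# Output: 64% base, 76% with interview (matching the card)
```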

Free tier: 3 strategy analyses per month