Prosecution Insights
Last updated: April 19, 2026
Application No. 19/197,501

OBJECT PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM

Non-Final OA: §101, §102, §103
Filed
May 02, 2025
Examiner
HONG, RICHARD J
Art Unit
2623
Tech Center
2600 — Communications
Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
OA Round
1 (Non-Final)
Grant Probability: 78% (Favorable)
OA Rounds: 1-2
To Grant: 2y 0m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 78% (above average) • 459 granted / 589 resolved • +15.9% vs TC avg
Interview Lift: +4.4% (minimal) • resolved cases with interview
Avg Prosecution: 2y 0m (fast prosecutor) • 35 currently pending
Total Applications: 624 • across all art units (career history)

Statute-Specific Performance

§101: 1.6% (-38.4% vs TC avg)
§103: 58.4% (+18.4% vs TC avg)
§102: 22.9% (-17.1% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Tech Center average is an estimate • Based on career data from 589 resolved cases
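The per-statute deltas are internally consistent with a single flat Tech Center baseline: subtracting each "vs TC avg" delta from its rate yields 40.0% for every statute. A minimal sketch of that check (values transcribed from the cards above; the 40% baseline is an inference from the arithmetic, not a published figure):

```python
# Rates and "vs TC avg" deltas transcribed from the cards above.
rates  = {"101": 1.6, "103": 58.4, "102": 22.9, "112": 8.5}
deltas = {"101": -38.4, "103": 18.4, "102": -17.1, "112": -31.5}

# Backing each delta out of its rate recovers the baseline the
# comparison was made against; all four come out to the same 40.0%.
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
```

This suggests the Tech Center average shown is one pooled estimate applied to all four statutes rather than a per-statute figure.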

Office Action

§101 §102 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-17 and 19-21 are pending.

Title

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: OBJECT PROCESSING METHOD BY SEQUENTIALLY SELECTED APPLICATIONS AND APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in China on Nov. 4, 2022. It is noted, however, that applicant has not filed a certified copy of the CN 202211378082.4 application as required by 37 CFR 1.55.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 because it does not limit the computer readable medium to non-transitory tangible media. The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009; p. 2. 1351 Off. Gaz. Pat. Off. 212 (2010). The computer readable medium recited in claim *** encompasses a transitory, propagating signal, which is not a process, machine, manufacture, or composition of matter. Nuijten, 500 F.3d at 1357. The claim “covers material not found in any of the four statutory categories [and thus] falls outside the plainly expressed scope of § 101.” Id. at 1354.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-11, 16-17 and 19-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Quan et al. (CN 104407769 A, IDS, hereinafter English translation by Clarivate Analytics).

As to claim 1, Quan discloses an object processing method (Quan, Abs., a “picture processing method”), performed by an electronic device (Quan, FIG. 7, [0107], “device 700 may be a mobile phone”), the method comprising: displaying a first processing interface (Quan, FIG. 3, [0080], e.g., the interface containing “application A, application B and application C”), wherein the first processing interface displays a target object (Quan, see FIG. 3, [0076], e.g., a “picture to be processed of the editing interface”); obtaining at least one application configured to process the target object, and displaying an identifier of each of the at least one application in the first processing interface (Quan, FIG. 3, [0080], e.g., “application A, application B and application C”); sequentially selecting at least one application from the at least one application (Quan, FIG. 2, [0074], “In step 202, the icon of at least one designated image processing application in many image processing applications, image editing function added to the list, the at least one designated image processing application selected by the user”) and processing a corresponding to-be-processed object by each currently-selected application to obtain an object processing result after at least one processing (Quan, FIG. 2, [0084], “In step 205, when any one of the image processing function in functional menu is clicked, transferring the picture processing function corresponding to the processing module to be processed picture for editing processing to obtain the processed picture”); wherein the to-be-processed object (Quan, FIG. 2, [0084], “the processed picture”) corresponding to an application that is firstly selected from the at least one application is the target object (Quan, see FIG. 3, [0076], e.g., a “picture to be processed of the editing interface”); the to-be-processed object corresponding to an application that is not firstly selected from the at least one application is an object obtained after processing the target object for at least once (Quan, e.g., FIG. 5, [0093], “As shown in FIG. 5, application C in the calling application displaying list of the picture to be processed, can be obtained as shown in FIG. 5 As shown the picture after processing. when the terminal detects that the user clicks the operation B in application display list, if right shown in FIG. 5, the application displaying the list display function menu B application. after that, the terminal according to the user performing the click operation on the function menu, invoking corresponding picture processing application for editing processing”).

As to claim 2, Quan discloses the object processing method according to claim 1, wherein the sequentially selecting at least one application from the at least one application (Quan, FIG. 2, [0074], “In step 202, the icon of at least one designated image processing application in many image processing applications, image editing function added to the list, the at least one designated image processing application selected by the user”) and processing a corresponding to-be-processed object by each currently-selected application to obtain an object processing result after at least one processing (Quan, FIG. 2, [0084], “In step 205, when any one of the image processing function in functional menu is clicked, transferring the picture processing function corresponding to the processing module to be processed picture for editing processing to obtain the processed picture”), comprises: obtaining the currently-selected application from the at least one application; displaying an application interface of the currently-selected application (Quan, FIG. 2, [0076], “In step 203, when the display operation is detected of at least one designated image processing application, at least the icon of one specified image processing application, to display on the picture to be processed of the editing interface”); processing the to-be-processed object corresponding to the currently-selected application through the application interface (Quan, FIG. 2, [0084], “In step 205, when any one of the image processing function in functional menu is clicked, transferring the picture processing function corresponding to the processing module to be processed picture for editing processing to obtain the processed picture”); performing a next application selection after the currently-selected application completing processing, until the object processing result being obtained (Quan, FIG. 2, [0090], “the process represented by step 204 and step 205 can be repeatedly executed when editing and processing the picture to be processed, that is, can invoke a plurality of specified image processing application show list in the picture to be processed to be processed”).
As to claim 3, Quan discloses the object processing method according to claim 2, wherein, the displaying an application interface of the currently-selected application (Quan, FIG. 2, [0076], “In step 203, when the display operation is detected of at least one designated image processing application, at least the icon of one specified image processing application, to display on the picture to be processed of the editing interface”) comprises: switching from displaying the first processing interface to displaying the application interface of the currently-selected application (Quan, e.g., FIG. 4, [0083], “application display C in the application list is clicked, will as right shown in FIG. 4, upper area of the application show list editing interface displays the function menu of the C application”); and the performing a next application selection after the currently-selected application completing processing (Quan, FIG. 2, [0090], “the process represented by step 204 and step 205 can be repeatedly executed when editing and processing the picture to be processed, that is, can invoke a plurality of specified image processing application show list in the picture to be processed to be processed”), comprises: switching, after the processing being completed, from displaying the application interface to displaying the first processing interface to perform the next application selection (Quan, e.g., [0088], “For example, for a Instagram application, the optimal processing function is a filter function, for picture show, optimal processing function is a skin whitening function, dispelling scar and acne removing function, and so on. Of course, any image processing function in the function menu further can be selected for the user manual of the one or more image processing function”). 
As to claim 4, Quan discloses the object processing method according to claim 3, further comprising: determining, in response to an export operation being performed on the application interface, that the processing is completed, wherein the export operation is configured to save an object processed through the application interface to the electronic device (Quan, FIG. 2, [0095]-[0096], “In step 206, the storage or display of the processed picture”; “it can be by picture 3, picture storage key 4 and FIG. 5 in the processed picture to be stored”).

As to claim 5, Quan discloses the object processing method according to claim 1, wherein the obtaining at least one application configured to process the target object (Quan, FIG. 3, [0080], e.g., “application A, application B and application C”) comprises: obtaining at least one application configured to process the target object and a corresponding processing sequence of a plurality of applications of the at least one application; wherein the to-be-processed object corresponding to an application that is not firstly selected from the at least one application is an object output by one of the at least one application which performs the processing earlier than the currently-selected application (Quan, [0088], “For example, for a Instagram application, the optimal processing function is a filter function, for picture show, optimal processing function is a skin whitening function, dispelling scar and acne removing function, and so on”).
As to claim 6, Quan discloses the object processing method according to claim 5, wherein the obtaining at least one application configured to process the target object and a corresponding processing sequence of a plurality of applications of the at least one application (Quan, [0088], “For example, for a Instagram application, the optimal processing function is a filter function, for picture show, optimal processing function is a skin whitening function, dispelling scar and acne removing function, and so on”), comprises: displaying a first application selection interface in response to a first trigger operation performed on the first processing interface, wherein the first application selection interface displays to-be-selected applications (Quan, e.g., FIG. 5, [0093], “application C in the calling application displaying list of the picture to be processed, can be obtained as shown in FIG. 5 As shown the picture after processing. when the terminal detects that the user clicks the operation B in application display list, if right shown in FIG. 5, the application displaying the list display function menu B application. after that, the terminal according to the user performing the click operation on the function menu, invoking corresponding picture processing application for editing processing”); obtaining a plurality of applications selected from the to-be-selected applications as the at least one application corresponding to the target object; using a sequence in which the plurality of applications are selected in the first application selection interface as the processing sequence of the plurality of applications (Quan, FIG. 5, [0097], “according to the click operation of the user, invoking at least one appointed picture processing application capable of editing picture in the list, the picture to be processed to perform the editing process, because of the multiple image processing application integrated in the image editing function list. thus, when the picture to be processed by each image processing application for processing, without executing the application, closing the application, repeating step of storing pictures, so it not only has convenient operation, but also the quality of the picture after multiple application processing is not influenced”).

As to claim 7, Quan discloses the object processing method according to claim 1, wherein the obtaining at least one application configured to process the target object (Quan, FIG. 3, [0080], e.g., “application A, application B and application C”) comprises: displaying, in response to a second trigger operation performed on the first processing interface, a second application selection interface, wherein the second application selection interface displays a plurality of application combinations (Quan, FIG. 2, [0072], “in the setting of the terminal adds one option " add image processing application. Then, when the user clicks the add option, the terminal will determine a plurality of image processing application itself is installed, and the installation of the image processing application in the form of a list displayed in the display interface of the terminal. In addition, when the application list display, can display icon and name corresponding to each image processing application”); using a plurality of applications comprised in an application combination selected from the plurality of application combinations as the at least one application corresponding to the target object (Quan, FIG. 2, [0057], “the multiple image processing application integrated in the image editing function list”).

As to claim 8, Quan discloses the object processing method according to claim 7, further comprising: generating the plurality of application combinations based on a plurality of applications that were selected in a past; or generating the plurality of application combinations (Quan, FIG. 2, [0057], “the multiple image processing application integrated in the image editing function list”) based on processing contents corresponding to the target object (Quan, e.g., see FIGS. 3-5, [0073], “wherein the map show skin whitening, acne and scar, elongated legs, striking and cutting picture; better filtration effect of Instagram, but only the picture of the square process, Snapee with many beautiful and lovely small icon and a frame; it can beautify as elements of the picture, Squaready excel the pictures of different processed into square, whether an image stored therein or cutting cut Thotoviva adept element in the picture mosaic shape of different colours”).

As to claim 9, Quan discloses the object processing method according to claim 7, further comprising: displaying an application scenario to which each of the plurality of application combinations applies (Quan, e.g., see FIGS. 3-5, [0073], “wherein the map show skin whitening, acne and scar, elongated legs, striking and cutting picture; better filtration effect of Instagram, but only the picture of the square process, Snapee with many beautiful and lovely small icon and a frame; it can beautify as elements of the picture, Squaready excel the pictures of different processed into square, whether an image stored therein or cutting cut Thotoviva adept element in the picture mosaic shape of different colours”).

As to claim 10, Quan discloses the object processing method according to claim 1, wherein the displaying an identifier of each of the at least one application in the first processing interface (Quan, FIG. 3, [0080], e.g., “application A, application B and application C”) comprises: displaying the at least one application in the first processing interface based on a processing sequence in which the at least one application processes the target object (Quan, see FIGS. 3-5).
As to claim 11, Quan discloses the object processing method according to claim 1, wherein an object type of the target object is an image (Quan, see FIGS. 3-5), the method further comprises: displaying a post-processed object obtained by processing the corresponding to-be-processed object by each of the selected at least one application (Quan, [0092], “first picture processing application included in the at least one specified picture processing application. second image processing application is first other than image processing application of other specified image processing application. second image processing application can be comprised of an image processing application, and may include a plurality of image processing application. When the second picture processing application comprises a plurality of image processing application, the picture can be used in a plurality of image processing application calling a picture processing application to process the first processed image, obtained after processing, and then transfers the other image processing application continue processing the picture of the processed, repeating said step till obtaining When user satisfaction for the picture”).

As to claim 16, Quan discloses the object processing method according to claim 1, wherein an object type of the target object is an image, and an object type of the to-be-processed object is an image (Quan, see FIGS. 1-5).

As to claim 17, Quan discloses the object processing method according to claim 1, wherein an object type of the target object is a text (Quan, see FIGS. 3-5, it is reasonably inferred that images may contain text) or an audio.

As to claim 19, it differs from claim 1 only in that it is the electronic device performing the object processing method of claim 1. It recites substantially the same limitations as in claim 1, and Quan discloses them. Please see claim 1 for detailed analysis.
As to claim 20, it differs from claim 1 only in that it is the computer-readable storage medium having program codes stored therein to be executed for performing the object processing method of claim 1. It recites substantially the same limitations as in claim 1, and Quan discloses them. Please see claim 1 for detailed analysis.

As to claim 21, it recites substantially the same limitations as in claim 2, and Quan discloses them. Please see claim 2 for detailed analysis.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 12 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Quan et al. (CN 104407769 A, IDS, hereinafter English translation by Clarivate Analytics).

As to claim 12, Quan teaches the object processing method according to claim 11, wherein the at least one application configured to process the target object comprises a plurality of applications, and the plurality of applications have a processing sequence (Quan, FIG. 2, [0057], “the multiple image processing application integrated in the image editing function list”); and the displaying a post-processed object obtained by processing the corresponding to-be-processed object by each of the selected at least one application (Quan, [0085], “in the function menu of the appointed picture processing is clicked in the step 204 application, any image processing function area is clicked, the terminal calls the image processing function picture to be processed and performs the editing processing”), comprises: displaying, post-processed objects obtained after processing corresponding to-be-processed objects by the plurality of applications (Quan, [0096], “the processed picture is directly displayed in the editing interface”). Quan does not explicitly teach “in a stacked manner; wherein a post-processed object obtained by one of the plurality of applications that performs the processing at a later stage is displayed at a higher layer of a stack”. However, Examiner takes an Official Notice that it is old and well known in the art of image processing to display a plurality of processed images in a stacked manner, wherein newly processed images are displayed at higher layers of the stack. At the time of effective filing date, given that Quan teaches the concept of displaying the plurality of processed images by corresponding image processing application, respectively, it would have been obvious to one of ordinary skill in the art to modify the processed images, i.e., the “post-processed object(s)” to be displayed in a stacked manner, wherein a post-processed object processed at a later stage is displayed at higher layers of the stack, in order to provide that the user can sequentially see the editing results in the order of the stack, which corresponds to the order of the processes performed by respective application.
As to claim 14, Quan teaches the object processing method according to claim 1, wherein the displaying a first processing interface comprises: displaying the first processing interface by performing an interface startup operation on the target object displaying interface (Quan, e.g., [0063], “The first icon clicking operation of user, calling the first icon click operation corresponding to the first picture processing application, the picture to be processed to perform the editing process to obtain a first processed image and a first picture processing application included in the at least one designated image processing application”). Quan does not explicitly teach “displaying objects in an object collection; obtaining an object selected from the object collection as the target image and displaying a target object displaying interface corresponding to the target object”. However, Examiner takes an Official Notice that it is old and well known in the art of image processing to select (obtain) an image as a target image while a plurality of images (image collection) are being displayed. At the time of effective filing date, it would have been obvious to one of ordinary skill in the art to modify the step of “calling the first icon click operation corresponding to the first picture processing application, the picture to be processed to perform the editing process to obtain a first processed image” taught by Quan to be followed by the step of selecting an image out of an image collection, in order to easily select a target image by a simple click among many images.

As to claim 15, Quan teaches the object processing method according to claim 1, wherein the displaying a first processing interface comprises: displaying the selected target object in the first processing interface (Quan, see FIGS. 3-5).
Quan does not explicitly teach “wherein the displaying a first processing interface comprises: displaying a first processing interface that comprises an object adding control; displaying an object selection interface in a case that a click operation performed on the object adding control is detected, wherein the object selection interface displays a to-be-selected object; obtaining the target object selected from the object selection interface”. However, Examiner takes an Official Notice that it is old and well known in the art of image processing that an image processing interface comprises an image adding control by clicking an image and select the image as the target image. At the time of effective filing date, it would have been obvious to one of ordinary skill in the art to modify the step of “displaying the selected target object in the first processing interface” to be followed by performing the target image adding control, i.e., clicking the image, in order to easily select a target image by way of clicking.

Allowable Subject Matter

Claim 13 would be allowable if rewritten to include all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: As to claim 13, the closest known prior art, i.e., Quan et al. (CN 104407769 A, IDS) and Xiao (CN 103946801 A), alone or in reasonable combination, fails to teach limitations in consideration of the claims as a whole, specifically with respect to the limitations “in response to a dragging operation performed on the post-processed object being detected and the dragging operation satisfying a target condition, displaying the post-processed object receiving the dragging operation side by side with another post-processed object generated earlier than the post-processed object receiving the dragging operation; displaying an enlarged icon of a first application and an enlarged icon of a second application; wherein the first application is an application that outputs the post-processed object receiving the dragging operation, and the second application is an application that outputs the another post-processed object generated earlier than the post-processed object receiving the dragging operation”.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: Xiao (CN 103946801 A) teaches the concept of “receiving a plurality of application program icon selected by the user and … a collaborative operation instruction sequence … an execution unit according to the cooperative operation instruction sequence” (Abs.).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD J HONG whose telephone number is (571) 270-7765. The examiner can normally be reached on 9:00 AM to 6:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, LunYi Lao can be reached on (571) 272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Jan. 21, 2026
/RICHARD J HONG/
Primary Examiner, Art Unit 2621

Prosecution Timeline

May 02, 2025
Application Filed
Jan 21, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596398
FLEXIBLE ELECTRONIC DEVICE AND OPERATION METHOD THEREOF
2y 5m to grant • Granted Apr 07, 2026
Patent 12578827
DISPLAY SUBSTRATE AND DISPLAY DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12572215
ELECTRONIC DEVICE, AND METHOD FOR PREVENTING/REDUCING MISRECOGNITION OF GESTURE IN ELECTRONIC DEVICE
2y 5m to grant • Granted Mar 10, 2026
Patent 12573159
FUTURE POSE PREDICTOR FOR A CONTROLLER
2y 5m to grant • Granted Mar 10, 2026
Patent 12566514
TOUCH STRUCTURE HAVING THROUGH HOLES ON OVERLAPPING PARTS AND DISPLAY PANEL
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 82% (+4.4%)
Median Time to Grant: 2y 0m
PTA Risk: Low
Based on 589 resolved cases by this examiner. Grant probability derived from career allow rate.
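The headline projections follow directly from the examiner's career counts: 459 grants out of 589 resolved cases rounds to the 78% figure, and adding the reported +4.4% interview lift gives the 82% figure. A minimal sketch of that arithmetic (the rounding convention is my assumption):

```python
# Career counts reported for this examiner.
granted, resolved = 459, 589

# Career allow rate -> headline grant probability.
allow_rate = granted / resolved * 100   # 77.9...%, shown as 78%
grant_probability = round(allow_rate)

# Reported +4.4% interview lift added on top of the base rate.
with_interview = round(allow_rate + 4.4)
```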
