Prosecution Insights
Last updated: April 19, 2026
Application No. 18/567,643

METHOD, APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM FOR IMAGE DISPLAY

Non-Final OA (§103)
Filed: Dec 06, 2023
Examiner: LEE, BENEDICT E
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (92 granted / 106 resolved; above average, +24.8% vs TC avg)
Interview Lift: +14.8% (moderate), measured across resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 16 applications currently pending
Career History: 122 total applications across all art units

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§102: 31.8% (-8.2% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)

Comparisons are against a Tech Center average estimate. Based on career data from 106 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. § 119(a)-(d). The certified copy has been filed in priority Application No. CN202111409329.X, filed on 11/19/2021.

Claim Objections

Claim 21 is objected to because of the following informality: claim 21 depends from claim 9, which Applicant cancelled. Appropriate correction is required. A series of singular dependent claims is permissible in which a dependent claim refers to a preceding claim which, in turn, refers to another preceding claim. A claim which depends from a dependent claim should not be separated by any claim which does not also depend from said dependent claim. It should be kept in mind that a dependent claim may refer to any preceding independent claim. In general, applicant's sequence will not be changed. See MPEP § 608.01(n). Accordingly, claim 21 has not been treated on the merits in this Office action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1–5, 8, 14–18, and 22–25 are rejected under 35 U.S.C. § 103 as being unpatentable over Song et al. (U.S. 11,442,274 B2) in view of Nishiyama et al. (U.S. 10,372,297 B2).

Regarding claim 1, Song discloses a method for image display, comprising: obtaining preview information corresponding to a plurality of multimedia contents, (Per Fig. 2, Song’s processor 250 computes a graphic object in multimedia content based on images. Song col. 10 lines 25–42. The graphic object is referred to as various terms such as a visual object, a graphical object, and a controller object. According to an embodiment, the graphic object includes images of various shapes.) wherein the preview information comprises a plurality of superimposed displayed images; (Per Fig. 2, Song’s processor 250 discloses that a background image and the graphic object are superimposed to render multimedia content. Id. col. 25 lines 27–41.
[t]he processor 250 displays the tennis racket on a location corresponding to the external electronic device 202 in the background images associated with the tennis court or tennis stadium, so that the background image and/or the graphic object are superimposed.) presenting a plurality of content cards on a target page (a plurality of content cards on a target page construed as various shapes after processing user input information[1]), wherein the preview information is displayed on the content cards. (Per Fig. 2, Song discloses that the graphical object, which includes various shapes, is represented in a display 230. Id. col. 10 lines 25–42. [t]he graphic object includes images of various shapes.)

Song fails to specifically disclose in response to a triggering operation on the target page, determining, based on the triggering operation, movement information corresponding to respective images in the preview information in the respective content cards; and controlling the respective images to be moved based on the corresponding movement information in the respective content cards, while controlling the content cards to be moved based on the triggering operation.

In related art, Nishiyama discloses in response to a triggering operation on the target page, determining, based on the triggering operation, movement information corresponding to respective images in the preview information in the respective content cards; and (Per Fig. 7, Nishiyama discloses movement tracking based on calculation where the symbols 14D of the cards 10 are set up in a live map such that information is retraced in chronological order. Nishiyama col. 12 lines 25–36. The display controller 161 displays the symbols 14D allocated to the respective users 105 at the calculated coordinate values. Displaying a live map based on constantly changing position information enables the movement of the users 105 to be displayed in real time.) controlling the respective images (controlling the respective images construed as detecting a selection operation by a user) to be moved based on the corresponding movement information in the respective content cards, (Per Fig. 15, Nishiyama’s detector 122 determines whether selection option cards 10 are tracked by a user. Id. col. 11 lines 1–22. The detector 122 detects a selection operation by a user 105 with respect to the selection option cards 10 displayed on the first display device 130.) while controlling the content cards to be moved based on the triggering operation. (Per Fig. 15, with its display controller 121, Nishiyama discloses that movement occurs to notify users between a change-card and the surrounding selection option cards 10. Id. col. 10 lines 54–67. [s]ince movement occurs on the screen when enlarging display of the change-card, it is easy to attract the attention of the users 105 to the change-card and the surrounding selection option cards 10, and the way in which the display mode has changed becomes easier to comprehend.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Nishiyama into the teachings of Song to effectively understand a user’s desires such that information is evaluated correctly. Id. col. 1 lines 40–50.

Regarding claim 14, Song discloses a computer device, comprising: a processor (Fig. 2, 250 a processor), a memory (Fig. 2, 210 a memory), and a bus (Fig. 2, 270 a communication module), the memory having machine-readable instructions stored thereon which are executable by the processor, wherein the processor communicates with the memory through the bus when the computer device is running, and the machine-readable instructions, when executed by the processor, perform acts comprising: obtaining preview information corresponding to a plurality of multimedia contents, (Per Fig.
2, Song’s processor 250 computes a graphic object in multimedia content based on images. Song col. 10 lines 25–42. The graphic object is referred to as various terms such as a visual object, a graphical object, and a controller object. According to an embodiment, the graphic object includes images of various shapes.) wherein the preview information comprises a plurality of superimposed displayed images; (Per Fig. 2, Song’s processor 250 discloses that a background image and the graphic object are superimposed to render multimedia content. Id. col. 25 lines 27–41. [t]he processor 250 displays the tennis racket on a location corresponding to the external electronic device 202 in the background images associated with the tennis court or tennis stadium, so that the background image and/or the graphic object are superimposed.) presenting a plurality of content cards on a target page (a plurality of content cards on a target page construed as various shapes after processing user input information), wherein the preview information is displayed on the content cards. (Per Fig. 2, Song discloses that the graphical object, which includes various shapes, is represented in a display 230. Id. col. 10 lines 25–42. [t]he graphic object includes images of various shapes.)

Song fails to specifically disclose in response to a triggering operation on the target page, determining, based on the triggering operation, movement information corresponding to respective images in the preview information in the respective content cards; and controlling the respective images to be moved based on the corresponding movement information in the respective content cards, while controlling the content cards to be moved based on the triggering operation.

In related art, Nishiyama discloses in response to a triggering operation on the target page, determining, based on the triggering operation, movement information corresponding to respective images in the preview information in the respective content cards; and (Per Fig. 7, Nishiyama discloses movement tracking based on calculation where the symbols 14D of the cards 10 are set up in a live map such that information is retraced in chronological order. Nishiyama col. 12 lines 25–36. The display controller 161 displays the symbols 14D allocated to the respective users 105 at the calculated coordinate values. Displaying a live map based on constantly changing position information enables the movement of the users 105 to be displayed in real time.) controlling the respective images (controlling the respective images construed as detecting a selection operation by a user) to be moved based on the corresponding movement information in the respective content cards, (Per Fig. 15, Nishiyama’s detector 122 determines whether selection option cards 10 are tracked by a user. Id. col. 11 lines 1–22. The detector 122 detects a selection operation by a user 105 with respect to the selection option cards 10 displayed on the first display device 130.) while controlling the content cards to be moved based on the triggering operation. (Per Fig. 15, with its display controller 121, Nishiyama discloses that movement occurs to notify users between a change-card and the surrounding selection option cards 10. Id. col. 10 lines 54–67. [s]ince movement occurs on the screen when enlarging display of the change-card, it is easy to attract the attention of the users 105 to the change-card and the surrounding selection option cards 10, and the way in which the display mode has changed becomes easier to comprehend.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Nishiyama into the teachings of Song to effectively understand a user’s desires such that information is evaluated correctly. Id. col. 1 lines 40–50.

Regarding claim 22, Song discloses a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium has a computer program stored thereon that, when executed by a processor, performs acts comprising: obtaining preview information corresponding to a plurality of multimedia contents, (Per Fig. 2, Song’s processor 250 computes a graphic object in multimedia content based on images. Song col. 10 lines 25–42. The graphic object is referred to as various terms such as a visual object, a graphical object, and a controller object. According to an embodiment, the graphic object includes images of various shapes.) wherein the preview information comprises a plurality of superimposed displayed images; (Per Fig. 2, Song’s processor 250 discloses that a background image and the graphic object are superimposed to render multimedia content. Id. col. 25 lines 27–41. [t]he processor 250 displays the tennis racket on a location corresponding to the external electronic device 202 in the background images associated with the tennis court or tennis stadium, so that the background image and/or the graphic object are superimposed.) presenting a plurality of content cards on a target page (a plurality of content cards on a target page construed as various shapes after processing user input information), wherein the preview information is displayed on the content cards. (Per Fig. 2, Song discloses that the graphical object, which includes various shapes, is represented in a display 230. Id. col. 10 lines 25–42. [t]he graphic object includes images of various shapes.)

Song fails to specifically disclose in response to a triggering operation on the target page, determining, based on the triggering operation, movement information corresponding to respective images in the preview information in the respective content cards; and controlling the respective images to be moved based on the corresponding movement information in the respective content cards, while controlling the content cards to be moved based on the triggering operation.

In related art, Nishiyama discloses in response to a triggering operation on the target page, determining, based on the triggering operation, movement information corresponding to respective images in the preview information in the respective content cards; and (Per Fig. 7, Nishiyama discloses movement tracking based on calculation where the symbols 14D of the cards 10 are set up in a live map such that information is retraced in chronological order. Nishiyama col. 12 lines 25–36. The display controller 161 displays the symbols 14D allocated to the respective users 105 at the calculated coordinate values. Displaying a live map based on constantly changing position information enables the movement of the users 105 to be displayed in real time.) controlling the respective images (controlling the respective images construed as detecting a selection operation by a user) to be moved based on the corresponding movement information in the respective content cards, (Per Fig. 15, Nishiyama’s detector 122 determines whether selection option cards 10 are tracked by a user. Id. col. 11 lines 1–22. The detector 122 detects a selection operation by a user 105 with respect to the selection option cards 10 displayed on the first display device 130.) while controlling the content cards to be moved based on the triggering operation. (Per Fig. 15, with its display controller 121, Nishiyama discloses that movement occurs to notify users between a change-card and the surrounding selection option cards 10. Id. col. 10 lines 54–67. [s]ince movement occurs on the screen when enlarging display of the change-card, it is easy to attract the attention of the users 105 to the change-card and the surrounding selection option cards 10, and the way in which the display mode has changed becomes easier to comprehend.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Nishiyama into the teachings of Song to effectively understand a user’s desires such that information is evaluated correctly. Id. col. 1 lines 40–50.

Regarding claim 2, Song, as modified by Nishiyama, discloses the method, wherein the preview information comprises a background image and at least one foreground image; wherein the at least one foreground image is superimposed on the background image, and the at least one foreground image is superimposed in a predetermined hierarchical order. (Per Fig. 7, Nishiyama discloses movement tracking based on calculation where the symbols 14D of the cards 10 are set up in a live map such that information is retraced in chronological order. Nishiyama col. 12 lines 25–36. [m]ovement tracking of the user 105 may be acquired by retracing past position information in chronological order.)

Regarding claim 3, Song, as modified by Nishiyama, discloses the method, wherein the determining, based on the triggering operation, movement information corresponding to respective images in the preview information in the respective content cards comprises: determining a first movement distance corresponding to the triggering operation; (Per Fig. 7, Song discloses a predefined distance based on its external electronic device 202 location detecting speed change. Song col. 24 line 50 – col. 25 line 10.
The processor 250 stops displaying the graphic object when the predicted location of the external electronic device 202 is outside a predefined distance based on the information about the tilt information of the external electronic device 202 and/or the information about the speed change of the external electronic device 202.) and for the respective images in the preview information, determining second movement distances corresponding to the respective images based on movement distance calculation modes corresponding to the respective images and the first movement distance. (Per Fig. 2, Song discloses second location information among a plurality of images by its camera 240. Id. col. 12 line 54 – col. 13 line 3. The second location information is obtained by the most recent image including the external electronic device 202 among a plurality of images acquired by the camera 240.)

Regarding claim 4, Song, as modified by Nishiyama, discloses the method, wherein the determining, based on the triggering operation, movement information corresponding to respective images in the preview information in the respective content cards comprises: determining, based on a triggering direction of the triggering operation, movement directions corresponding to the respective images, wherein the controlling the respective images to be moved based on the corresponding movement information in the respective content cards comprises: (Per Fig. 2, Song’s processor 250 determines a 3D representation of the object in the tilted direction. Song col. 11 line 38 – col. 12 line 7. [t]he processor 250 determines the tilted direction or angle in three dimensions by referring to the gyro sensor data indicating at least one of a pitch,) determining a movement time corresponding to the triggering operation; (Per Fig. 2, Song’s processor 250 updates a display of the object at a deviation time. Id. col. 14 lines 50–57. [t]he processor 250 changes a display of the graphic object based at least on information on the location of the external electronic device 202 at a deviation time obtained using the camera 240,) determining movement speeds corresponding to the respective images based on the movement directions corresponding to the respective images and the movement time; and (Per Fig. 7, Song discloses a predefined distance based on its external electronic device 202 location detecting speed change. Id. col. 24 line 50 – col. 25 line 10. The processor 250 stops displaying the graphic object when the predicted location of the external electronic device 202 is outside a predefined distance based on the information about the tilt information of the external electronic device 202 and/or the information about the speed change of the external electronic device 202.)

Regarding claim 5, Song, as modified by Nishiyama, discloses the method, wherein the determining, based on a triggering direction of the triggering operation, movement directions corresponding to the respective images comprises: determining property information (property information construed as a region in an image) corresponding to the respective images; and (Per Fig. 6, Song discloses a region in an image 610. Song col. 7 lines 25–43. [a]n area of the predetermined size corresponds to an area included in the image 610.) determining the movement directions corresponding to the respective images based on the triggering direction of the triggering operation and the property information corresponding to the respective images. (Per Fig. 2, Song’s processor 250 determines a 3D representation of the object in the tilted direction. Id. col. 11 line 38 – col. 12 line 7. [t]he processor 250 determines the tilted direction or angle in three dimensions by referring to the gyro sensor data indicating at least one of a pitch,)

Regarding claim 8, Song, as modified by Nishiyama, discloses the method, wherein in accordance with a determination that the preview information displayed in any of the content cards comprises a dynamic image, the method further comprises: playing the dynamic image, while moving the content card to a target position region of a screen interface. (Per Fig. 7, Nishiyama discloses a layout diagram 14C using different symbols 14D to track a movement 14E. Nishiyama col. 6 lines 30–38. [t]he current position of each user 105 is displayed on the layout diagram 14C using different symbols 14D (a circle, cross, triangle, and square in the example of FIG. 7) for each user 105… movement tracking 14E (the dotted arrow in the example of FIG. 7) of the users 105 may also be displayed.)

Regarding claim 15, it has been rejected in the same manner as claim 2.
Regarding claim 16, it has been rejected in the same manner as claim 3.
Regarding claim 17, it has been rejected in the same manner as claim 4.
Regarding claim 18, it has been rejected in the same manner as claim 5.
Regarding claim 23, it has been rejected in the same manner as claim 2.
Regarding claim 24, it has been rejected in the same manner as claim 3.
Regarding claim 25, it has been rejected in the same manner as claim 4.

Allowable Subject Matter

Claims 6–7 and 19–20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yerli (U.S. 2016/0260254 A1) discloses a method of providing digital content on an electronic device.
Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENEDICT LEE, whose telephone number is (571) 270-0390. The examiner can normally be reached 10:00-16:00 (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R. Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BENEDICT E LEE/
Examiner, Art Unit 2665

/Stephen R Koziol/
Supervisory Patent Examiner, Art Unit 2665

[1] Per Applicant’s ¶ 0100, a content card is a shape in a display form to represent an object in an image.
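Read functionally, the disputed claim 1 limitations (per-image "movement information" derived from one triggering operation, applied while the card itself also moves with the drag) describe a parallax-style layered scroll. A minimal sketch of that behavior, assuming the claimed "movement distance calculation modes" reduce to per-image scale factors; this is an illustration only, not code disclosed in the application or the cited art:

```python
def move_card(card_offset, drag_distance, layer_factors):
    """Move a content card by the full drag distance (the 'triggering
    operation') and each superimposed image by its own scaled distance
    (its 'second movement distance')."""
    new_card_offset = card_offset + drag_distance
    image_offsets = [factor * drag_distance for factor in layer_factors]
    return new_card_offset, image_offsets

# A far background layer (0.2) moves less than a near foreground layer (1.0),
# producing a depth effect while the card tracks the drag exactly.
card, images = move_card(0.0, 100.0, [0.2, 0.5, 1.0])
```

On this reading, claim 4's speed limitation falls out of the same numbers: each image's movement speed would be its scaled distance divided by the movement time of the drag.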

Prosecution Timeline

Dec 06, 2023
Application Filed
Jan 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567243: METHOD FOR OPTIMIZING DATA TO BE USED TO TRAIN OBJECT RECOGNITION MODEL, METHOD FOR BUILDING OBJECT RECOGNITION MODEL, AND METHOD FOR RECOGNIZING AN OBJECT (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561958: METHOD OF TRAINING SEMICONDUCTOR PROCESS IMAGE GENERATOR (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561215: GRAPH MACHINE LEARNING FOR CASE SIMILARITY (granted Feb 24, 2026; 2y 5m to grant)
Patent 12548170: METHOD, DEVICE AND SYSTEM FOR REAL-TIME MULTI-CAMERA TRACKING OF A TARGET OBJECT (granted Feb 10, 2026; 2y 5m to grant)
Patent 12541999: METHOD FOR EMOTION RECOGNITION BASED ON HUMAN-OBJECT TIME-SPACE INTERACTION BEHAVIOR (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview (+14.8%): 99%
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 106 resolved cases by this examiner. Grant probability derived from career allow rate.
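The projection figures are internally consistent with a simple derivation from the examiner's career counts. A sketch of that arithmetic, assuming the tool rounds a plain granted/resolved ratio and applies the interview lift multiplicatively (both are assumptions; the tool's actual methodology is not stated):

```python
granted, resolved = 92, 106
allow_rate = granted / resolved               # 0.8679..., displayed as 87%
print(f"Grant probability: {allow_rate:.0%}")

tc_delta = 0.248                              # "+24.8% vs TC avg"
tc_avg_estimate = allow_rate - tc_delta       # implies a TC average near 62%

interview_lift = 0.148                        # "+14.8% interview lift"
with_interview = min(allow_rate * (1 + interview_lift), 1.0)  # ~0.996, in line with the displayed 99%
```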
