Prosecution Insights
Last updated: April 19, 2026
Application No. 18/917,523

INFORMATION PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT

Non-Final Office Action: §102, §103, §112
Filed: Oct 16, 2024
Examiner: FOGG, CYNTHIA M
Art Unit: 2421
Tech Center: 2400 (Computer Networks)
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; +18.2% vs TC avg; 324 granted / 425 resolved)
Interview Lift: +23.5% (strong), comparing resolved cases with an interview vs. without
Avg Prosecution: 2y 1m (fast prosecutor); 4 applications currently pending
Career History: 429 total applications across all art units

Statute-Specific Performance

§101: 8.0% (-32.0% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 21.1% (-18.9% vs TC avg)
Based on career data from 425 resolved cases; the Tech Center average is used as the baseline estimate.

Office Action

Rejections under §102, §103, and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

This Office Action is made in reply to Application 18/917,523, filed 16 October 2024. As originally filed, Claims 1-18 are presented for examination.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: 101 ([0056], Fig. 1), 102 ([0056], Fig. 1), 200 ([0053]-[0054], Fig. 1), 500 ([0053], [0056], Fig. 1). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are further objected to because the specification ([0086] and [0152]) indicates that, in Fig. 6C, 601C is an opaque area and 602C is a transparent area, but on the drawings, Fig. 6C has area 601C labeled as a transparent area and 602C labeled as an opaque area. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended.
The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claim 9 is objected to because of the following informality: "the media information in information stream" in Claim 9, line 6, appears to be a typo and should apparently read --the media information in the information stream--. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
- an obtaining module configured to obtain in Claim 15; and
- a display module configured to display in Claim 15.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. It is unclear what is meant by "a playing time corresponding to the promotion video arrived" in Claim 7, lines 3-4. The phrase appears to be a literal translation into English from a foreign-language document and may contain idiomatic errors.

Allowable Subject Matter

Claim 4 is objected to as being dependent upon a rejected base claim, but would be allowable if all other rejections were overcome and if Claim 4 were rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-3, 5, 9, 13, and 15-18 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kojiro et al., JP 2019028797A (Kojiro) [English translation via Espacenet attached].

Regarding Claim 1, Kojiro discloses an information processing method performed by a terminal device, the method comprising: obtaining an information stream and a promotion video in response to a user trigger operation, the information stream comprising at least one piece of media information and the promotion video comprising at least one material to be recommended (Kojiro: [0013]-[0014], where a terminal device can be a smart device and is used by the user to access a site of content from a distribution server; [0017], where a terminal device receives content from a content distribution server and advertisement content); and displaying, at a first area of an information stream interface, a first part of the promotion video in a presentation mode and displaying, at a second area of the information stream interface, a second part of the promotion video in a transparent mode, so as to enable the information stream interface to be revealed through the second area, the first area and the second area being dynamically changing areas (Kojiro: [0017], where advertisement content is placed in the
advertisement area A1; [0018], where surrounding area A2 is a second area of advertisement content; [0019], where the display mode of the surrounding area A2 may be changed; [0020], where the display mode of the surrounding area A2 may be changed by overlapping the transparent image on the surrounding area A2; [0021], where advertisement content C12 is displayed in area A1 simultaneously with the display of C11 upon the change of the display form of area A2).

Regarding Claim 2, Kojiro discloses the method according to claim 1, wherein the method further comprises: closing the promotion video and switching the information stream interface to an information recommend interface in response to a first trigger operation on the first area, wherein the information recommend interface comprises related information of the material (Kojiro: [0024], where, when a predetermined operation is performed by the user, the display mode of the terminal device is changed to the original display mode).

Regarding Claim 3, Kojiro discloses the method according to claim 1, wherein the method further comprises: closing the promotion video and switching the information stream interface to an information display interface in response to a second trigger operation on the second area, and displaying, in the information display interface, media information corresponding to a trigger position of the second trigger operation in the information stream (Kojiro: [0097]-[0099], where, when a scroll operation is performed, the display mode may be changed, it is determined whether the scroll operation has ended, and the terminal device determines whether a page transition operation has been performed).
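For readers tracking the technology at issue, the "presentation mode / transparent mode" split recited in Claim 1 can be sketched as ordinary per-pixel alpha compositing. The following is a hypothetical illustration only; none of the names or values below come from the application or from Kojiro. Pixels in the first area keep the promotion video, while pixels in the second area are fully transparent so the underlying information stream shows through:

```python
# Hypothetical sketch of the claimed two-area display (not code from the
# application or any cited reference): per-pixel alpha compositing of a
# promotion-video frame over an information-stream frame.

def composite(feed, promo, alpha):
    """Blend promo over feed; alpha is a per-pixel mask in [0.0, 1.0]."""
    return [
        [a * p + (1.0 - a) * f for f, p, a in zip(f_row, p_row, a_row)]
        for f_row, p_row, a_row in zip(feed, promo, alpha)
    ]

feed  = [[10, 10], [10, 10]]      # information-stream pixels (grayscale)
promo = [[200, 200], [200, 200]]  # promotion-video pixels
mask  = [[1.0, 1.0], [0.0, 0.0]]  # row 0: "first area"; row 1: "second area"

result = composite(feed, promo, mask)
# row 0 shows the promotion video; row 1 reveals the feed beneath it
```

Under this reading, making the two areas "dynamically changing" amounts to updating the mask from frame to frame.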
Regarding Claim 5, Kojiro discloses the method according to claim 1, wherein the method further comprises: closing the promotion video and switching the information stream interface to an information recommend interface in response to a third trigger operation on any area in the promotion video, wherein the information recommend interface comprises related information of the material (Kojiro: [0025], where, when the user selects the advertisement content, advertisement content related to the product or service is displayed).

Regarding Claim 9, Kojiro discloses the method according to claim 1, wherein the at least one material displayed in the first area meets a matching condition, wherein the matching condition comprises at least one of the following types of materials in a picture of the promotion video: a material having a ratio of an imaging size thereof to a screen size being within a set ratio interval (Kojiro: [0023], where the second display mode is changed if the ratio of the advertisement area A is larger than the predetermined threshold value); a material having an association with the media information in the information stream and revealed through the second area (Kojiro: [0020], where area A2 overlaps the transparent image on page C11); or a material of which a material feature has a similarity with an interest feature of a current login account of the information stream interface.
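Claim 9's first matching condition (an imaging-size-to-screen-size ratio falling within a set interval) is simple arithmetic. A minimal sketch follows; the 0.25-0.75 interval and all names are arbitrary examples, not values taken from the application or Kojiro:

```python
# Hypothetical check for Claim 9's ratio condition; the 0.25-0.75 interval
# is an arbitrary illustration, not a value from the application.

def meets_ratio_condition(material_w, material_h, screen_w, screen_h,
                          lo=0.25, hi=0.75):
    """True if the material's area-to-screen-area ratio lies in [lo, hi]."""
    ratio = (material_w * material_h) / (screen_w * screen_h)
    return lo <= ratio <= hi

ok = meets_ratio_condition(540, 960, 1080, 1920)        # ratio is exactly 0.25
too_small = meets_ratio_condition(10, 10, 1080, 1920)   # far below the interval
```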
Regarding Claim 13, Kojiro discloses the method according to claim 1, wherein the information stream and the promotion video are drawn by a sub-thread of a processor, and a main thread of the processor is configured to render the information stream interface and the promotion video over at least a partial area of the information stream interface (Kojiro: [0017], where advertisement content is placed in the advertisement area A1; [0018], where surrounding area A2 is a second area of advertisement content; [0019], where the display mode of the surrounding area A2 may be changed; [0020], where the display mode of the surrounding area A2 may be changed by overlapping the transparent image on the surrounding area A2; [0021], where advertisement content C12 is displayed in area A1 simultaneously with the display of C11 upon the change of the display form of area A2).

Regarding Claim 15, Kojiro discloses an information processing apparatus, comprising: an obtaining module configured to obtain an information stream and a promotion video in response to a user trigger operation, the information stream comprising at least one piece of media information and the promotion video comprising at least one material to be recommended (Kojiro: [0013]-[0014], where a terminal device can be a smart device and is used by the user to access a site of content from a distribution server; [0017], where a terminal device receives content from a content distribution server and advertisement content); and a display module configured to display, at a first area of an information stream interface, a first part of the promotion video in a presentation mode and display, at a second area of the information stream interface, a second part of the promotion video in a transparent mode, so as to enable the information stream interface to be revealed through the second area, the first area and the second area being dynamically changing areas (Kojiro: [0017], where advertisement content is placed in the
advertisement area A1; [0018], where surrounding area A2 is a second area of advertisement content; [0019], where the display mode of the surrounding area A2 may be changed; [0020], where the display mode of the surrounding area A2 may be changed by overlapping the transparent image on the surrounding area A2; [0021], where advertisement content C12 is displayed in area A1 simultaneously with the display of C11 upon the change of the display form of area A2).

Regarding Claim 16, Kojiro discloses an electronic device comprising: a memory configured to store a computer-executable instruction (Kojiro: [0130], where a program for executing the operation is stored in a computer-readable recording medium); and a processor configured to implement, when executing the computer-executable instruction or a computer program stored in the memory (Kojiro: [0130], where a control device, such as a computer, may execute the program stored in the computer-readable recording medium), the information processing method according to claim 1 (see the rejection of Claim 1 above).

Regarding Claim 17, Kojiro discloses a non-transitory computer-readable storage medium, having a computer-executable instruction or a computer program stored therein, and the computer-executable instruction or the computer program, when executed by a processor (Kojiro: [0130], where a control device, such as a computer, may execute the program stored in the computer-readable recording medium), implementing the method according to claim 1 (see the rejection of Claim 1 above).

Regarding Claim 18, Kojiro discloses a computer program product, comprising a computer-executable instruction or a computer program, and the computer-executable instruction or the computer program, when executed by a processor (Kojiro: [0130], where a control device, such as a computer, may execute the program stored in the computer-readable recording medium), implementing the method according to claim 1 (see the rejection of Claim 1 above).
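The sub-thread/main-thread division recited in Claim 13 (a sub-thread draws the content while the main thread renders it) reads as a standard producer/consumer pattern. A rough sketch under that reading, with all names invented for illustration and no claim that either the application or Kojiro implements it this way:

```python
# Hypothetical producer/consumer sketch of Claim 13's threading split:
# a sub-thread "draws" (produces) frames; the main thread "renders" them.
import queue
import threading

frames = queue.Queue()

def draw_worker(n):
    # sub-thread: decode/draw the information stream and promotion video
    for i in range(n):
        frames.put(f"frame-{i}")
    frames.put(None)  # sentinel: drawing finished

sub = threading.Thread(target=draw_worker, args=(3,))
sub.start()

rendered = []
while True:
    item = frames.get()  # main thread: render each drawn frame in order
    if item is None:
        break
    rendered.append(item)
sub.join()
```

The queue preserves frame order, so the main thread renders frames in the sequence the sub-thread drew them.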
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 6-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kojiro in view of Hwang et al., US Pub. 2015/0160853 A1 (hereinafter Hwang).

Regarding Claim 6, Kojiro discloses the method according to claim 5, but fails to explicitly disclose wherein the method further comprises: displaying the information recommend interface corresponding to the promotion video in response to a fourth trigger operation for the terminal device that displays the promotion video, wherein the fourth trigger operation is a somatosensory operation. Hwang, from a similar endeavor, teaches displaying the information recommend interface corresponding to the promotion video in response to a fourth trigger operation for the terminal device that displays the promotion video, wherein the fourth trigger operation is a somatosensory operation (Hwang: Fig.
6 and [0082], where a gesture may be performed for issuing a command performing a screen transition to an advertisement video, for example; [0098], where a mutual screen transition can be enabled by a gesture in the advertisement area). In the current art, viewing another moving picture while already viewing a moving picture is very cumbersome (Hwang: [0007]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kojiro in view of Hwang such that a video transition may be performed with a simple gesture (Hwang: [0009]). This improves the user experience.

Regarding Claim 7, the combined teaching of Kojiro and Hwang discloses the method according to claim 6, wherein the method further comprises: closing the promotion video in response to that a video closing condition is met (Kojiro: [0022], where changing the display mode could be based on a predetermined time, i.e., 1 second to 10 seconds; Hwang: [0074], where an advertisement may close after the in-stream advertisement video completes), wherein the video closing condition comprises any of a playing time corresponding to the promotion video arrived, or a close control in the promotion video triggered (Kojiro: [0022], where changing the display mode could be based on a predetermined time, i.e., 1 second to 10 seconds; Hwang: [0074], where an advertisement may close after the in-stream advertisement video completes).
Regarding Claim 8, the combined teaching of Kojiro and Hwang discloses the method according to claim 7, wherein after the closing the promotion video in response to that a video closing condition is met, the method further comprises: loading a video playing interface in the information stream interface, wherein the video playing interface is configured to continuously play the promotion video or another video (Hwang: [0074], where the in-stream advertisement video continues playing through the entire display of the moving picture).

Claim(s) 10-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kojiro in view of Chunguo et al., CN109272565A (hereinafter Chunguo) [English translation via Espacenet attached].

Regarding Claim 10, Kojiro discloses the method according to claim 1, wherein before the displaying, at a first area of an information stream interface, a first part of the promotion video in a presentation mode, the method further comprises: extracting transparent channel information from the promotion video, wherein the transparent channel information corresponds to the second area (Kojiro: [0121], where a transparent image is overlaid on area A2 and the transparency of an image is determined). But Kojiro fails to explicitly disclose extracting RGB channel information; coloring the promotion video based on the transparent channel information, to obtain a colored video; and rendering the colored video into a video playing interface, wherein the video playing interface is configured to play the colored video over the information stream interface.
Chunguo, from a similar endeavor, teaches extracting RGB channel information (Chunguo: [0044]-[0047], where the channel can record transparency and the RGB value can be acquired); coloring the promotion video based on the transparent channel information, to obtain a colored video (Chunguo: [0051]-[0053] and [0060]-[0061], where the color information and the transparency information are stored and synthesized); and rendering the colored video into a video playing interface, wherein the video playing interface is configured to play the colored video over the information stream interface (Chunguo: [0062], where processing is performed to implement a transparent animation effect).

Because a current animation resource package consisting of multiple pictures is usually very large, the decoding time is long and a large amount of memory is used, which makes that animation playback mode less efficient and its playback performance poor (Chunguo: [0007]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kojiro in view of Chunguo such that transparent effect synthesis processing is performed on a first video file to realize a transparent animation effect (Chunguo: [0023]). Since the video file is used as the carrier of the transparent animation, this method can greatly reduce the decoding duration compared with playing the transparent animation as a PNG sequence; that is, playback of the transparent animation can be realized without occupying a large amount of memory, so the playback mode is more efficient and the playback performance is better (Chunguo: [0023]).

Regarding Claim 11, Kojiro discloses the method according to claim 1.
But Kojiro fails to explicitly disclose, wherein before the displaying, at a first area of an information stream interface, a first part of the promotion video in a presentation mode, the method further comprises: performing channel information separation on the promotion video, to obtain an RGB channel video and a transparent channel video, wherein the RGB channel video and the transparent channel video have identical picture sizes and video durations; and splicing the RGB channel video and the transparent channel video to obtain an updated promotion video, wherein the updated promotion video is rendered into a video playing interface, and the video playing interface is configured to play the promotion video over the information stream interface.

Chunguo, from a similar endeavor, teaches performing channel information separation on the promotion video, to obtain an RGB channel video and a transparent channel video, wherein the RGB channel video and the transparent channel video have identical picture sizes and video durations (Chunguo: [0044], where a color parameter value and the transparency parameter value in the original picture are obtained; [0083], where, since the sizes of the first area and the second area are consistent with the scene picture described, the pixel points are in one-to-one correspondence); and splicing the RGB channel video and the transparent channel video to obtain an updated promotion video, wherein the updated promotion video is rendered into a video playing interface, and the video playing interface is configured to play the promotion video over the information stream interface (Chunguo: [0041], where, in order to save the transparency information through the color channel, the color information is stored in each color channel of the area and the other area only includes the transparency information of the original picture; [0042], where a new picture sequence is formed based on each newly generated picture and, by performing transparent
effect synthesis processing on the first video file, the second video file finally used for realizing the transparent animation effect can be obtained).

Because a current animation resource package consisting of multiple pictures is usually very large, the decoding time is long and a large amount of memory is used, which makes that animation playback mode less efficient and its playback performance poor (Chunguo: [0007]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kojiro in view of Chunguo such that transparent effect synthesis processing is performed on a first video file to realize a transparent animation effect (Chunguo: [0023]). Since the video file is used as the carrier of the transparent animation, this method can greatly reduce the decoding duration compared with playing the transparent animation as a PNG sequence; that is, playback of the transparent animation can be realized without occupying a large amount of memory, so the playback mode is more efficient and the playback performance is better (Chunguo: [0023]).
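The "splicing" arrangement in Claims 10-11 (an RGB channel video and a transparent channel video of identical size, packed together and recombined at playback) resembles the common side-by-side alpha-packing trick for transparent video. A hypothetical sketch on tiny grayscale frames follows; all names are invented, and this is not code from Chunguo or the application:

```python
# Hypothetical side-by-side packing sketch for Claims 10-11: one half of each
# packed frame carries color, the other half carries per-pixel transparency
# encoded as grayscale.

def split_channels(packed_frame):
    """Split a packed frame (rows of pixel values) into color and alpha halves."""
    w = len(packed_frame[0]) // 2
    color = [row[:w] for row in packed_frame]
    alpha = [row[w:] for row in packed_frame]
    return color, alpha

def splice(color, alpha):
    """Rebuild the packed frame by placing the two halves side by side."""
    return [c + a for c, a in zip(color, alpha)]

packed = [
    [200, 200, 255, 0],  # left half: color; right half: alpha (255 = opaque)
    [200, 200, 255, 0],
]
color, alpha = split_channels(packed)
restored = splice(color, alpha)  # round-trips losslessly
```

By construction, the two halves share picture size and duration, which lines up with Claim 11's "identical picture sizes and video durations" limitation.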
Regarding Claim 12, the combined teaching of Kojiro and Chunguo discloses the method according to claim 11, wherein before the splicing the RGB channel video and the transparent channel video to obtain an updated promotion video, the method further comprises: compressing the transparent channel video by: obtaining transparent channel information corresponding to each pixel of each video frame in the transparent channel video (Chunguo: [0044], where the transparency parameter value of each pixel in the original picture is obtained; [0063], where transparency information for each pixel of each video frame is obtained); grouping all pixels in each video frame into a plurality of pixel groups, wherein each pixel group comprises a plurality of pixels, and the pixels in the pixel group are adjacent to each other (Chunguo: [0023], where a color parameter value and a transparency parameter value of each pixel in each original picture are obtained, the color parameter value is saved in each color channel of the first area in the target picture, and the transparency parameter value is stored in the second area); and for each pixel group, filling transparent channel information of the plurality of pixels in the pixel group into a transparent channel of a same pixel in the pixel group, so as to obtain a compressed transparent channel video (Chunguo: [0046], where the alpha channel is a special layer for recording transparency information; [0048], where the color channels of the second area hold the transparency parameter value; [0052], where the second area is generated based on the transparency parameter value of each pixel in the original picture; [0085], where sequentially combining an obtained series of target video frames forms a second video file which displays the transparent animation effect).

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kojiro in view of Mei et al., US Pub. 2011/0075992 A1 (hereinafter Mei).
Regarding Claim 14, Kojiro discloses the method according to claim 1, wherein before the displaying, at a first area of an information stream interface, a first part of the promotion video in a presentation mode, the method further comprises: performing feature extraction on each of the materials, to obtain a material feature of the material (Kojiro: [0088], where advertising content could include an object, a moving image, a character, a figure, a symbol, a hyperlink, etc.). But Kojiro fails to explicitly disclose obtaining an object feature of an object using an application, and obtaining a degree of similarity between each material feature and the object feature; sorting each material in descending order based on the degrees of similarity, to obtain a list sorted in a descending order; and selecting, from a top position of the list sorted in the descending order, the at least one material for displaying in the first area.

Mei, from a similar endeavor, teaches obtaining an object feature of an object using an application, and obtaining a degree of similarity between each material feature and the object feature (Mei: [0017], where keywords, text, information from metadata, an OCR text module, etc. can be determined for the video, and the video advertisement database contains video advertisements that can be overlaid into the video along with information about each advertisement, including keywords, slogans, logos, etc.); sorting each material in descending order based on the degrees of similarity, to obtain a list sorted in a descending order (Mei: [0018], where the advertisement ranking module employs text derived from the original video and the information associated with the advertisements to identify advertisements that are most contextually relevant to each shot of the video.
The advertisement ranking module uses visual similarity and keyframes to identify advertisement that are most visually similar to each keyframe of each shot of the video); and selecting, from a top position of the list sorted in the descending order, the at least one material for displaying in the first area (Mei: [0018], where the advertisement ranking module then combines results of the text-based and visual similarity-based selections to establish the most overall relevant advertisement to each shot of the video). Because of the proliferation of digital capture devices and the explosive growth of video-sharing sites as well as the fast and consistently growing online advertising market, there is motivation by the huge business opportunities for video advertising which incorporates advertisements into an online video, (Mei: [0001]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kojiro in view of Mei to determine contextually relevant ads to overlay on a video, (Mei: [0003]). By determining ads that are relevant to the video content, the embedded advertisements are more effective, (Mei: [0022]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lin et al., US Pub. 2014/0317655 A1 teach transparent overlay advertisement which appears at a bottom part of a YouTube video that a user is watching, such that the user may watch the video and the advertisement at the same time, (Abstract). Sankaran et al., US Pub. 2015/0025948 A1 teach that a brand aware ad exchange server would be provided with the ranked list by the ad matching engine, (Sankaran: [0061]). Van Riel, US Pub. 
2009/0222510 A1 teach applying a sorting or matching algorithm that is applied can rank available advertisement according to a preference order specified by the web site administrator, advertisement server administrator, individual advertisers or similar entities, (van Riel: [0039]). Examiner’s Note: The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested from the applicant in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Cynthia M FOGG whose telephone number is (571)272-2741. The examiner can normally be reached Monday-Friday 7:00-3:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Flynn can be reached at (571)272-1915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/CYNTHIA M FOGG/
Primary Examiner, Art Unit 2421
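For reference, the similarity-based material selection at issue in the claim 14 rejection (extract a feature per material, score each against the object feature, sort in descending order of similarity, take the top of the list) can be sketched as follows. Cosine similarity, the function names, and the vector representation are illustrative assumptions; neither the application nor Mei is cited as specifying this particular metric:

```python
import math

def cosine_similarity(a, b):
    # Illustrative similarity metric between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm if norm else 0.0

def select_materials(material_features, object_feature, top_k=1):
    """material_features: dict mapping material id -> feature vector.

    Scores every material against the object feature, sorts the
    materials in descending order of similarity, and returns the ids
    at the top of the sorted list.
    """
    ranked = sorted(material_features.items(),
                    key=lambda kv: cosine_similarity(kv[1], object_feature),
                    reverse=True)  # descending similarity
    return [material_id for material_id, _ in ranked[:top_k]]
```

A usage example under these assumptions: with material features {"a": [1, 0], "b": [0, 1], "c": [1, 1]} and object feature [1, 0], selecting the top two returns ["a", "c"], since "a" matches exactly and "c" partially overlaps while "b" is orthogonal.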

Prosecution Timeline

Oct 16, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598353
RENDERING A DYNAMIC ENDEMIC BANNER ON STREAMING PLATFORMS USING CONTENT RECOMMENDATION SYSTEMS AND ADVANCED BANNER PERSONALIZATION
2y 5m to grant Granted Apr 07, 2026
Patent 12593102
METHODS AND APPARATUS TO GENERATE REFERENCE SIGNATURES
2y 5m to grant Granted Mar 31, 2026
Patent 12593104
LEVERAGING EMOTIONAL TRANSITIONS IN MEDIA TO MODULATE EMOTIONAL IMPACT OF SECONDARY CONTENT
2y 5m to grant Granted Mar 31, 2026
Patent 12587710
DATA PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12581166
METHOD OF RECOMMENDING LIVE BROADCASTING ROOM, APPARATUS, DEVICE, AND MEDIUM
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+23.5%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 425 resolved cases by this examiner. Grant probability derived from career allow rate.
