DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
2. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
3. Claim 24 is rejected under 35 U.S.C. 101 because the claim recites “a computer readable medium…” and the claimed “…medium…” may be transitory.
Claim Rejections - 35 USC § 102
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
5. Claim(s) 1-9, 11, 12, 15-17, 19, 21, 23, 24 and 26 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by YANG et al. (US 2022/0239988).
As to claims 1, 2 and 4-6, YANG discloses a display method and apparatus for item information, and further discloses a display method and an electronic device comprising a processor and a memory, the memory being configured to store instructions or a computer program, and the processor being configured to execute the instructions or the program in the memory to cause the electronic device to execute the display method, comprising:
displaying (figs. 1-6, Server or Display Device “SDD”) a video playing page (figs. 1-13, [0041-0051] and [0063-0077]); displaying an information recommendation page in response to a trigger operation for a target control on the video playing page, wherein a recommended information set (objects or items of two or more types, e.g., object bear: types of bears) corresponding to an object for reference is displayed on the information recommendation page; the object for reference is determined according to follow state representation information of at least one first candidate object corresponding to the video playing page (bear, picture frame, baseball cap and other items or objects within the streaming page); the recommended information set comprises account attribute information of several candidate accounts (see figs. 3-6); objects to be displayed under each of the candidate accounts comprise a target display object; an object category of the target display object is the same as an object category corresponding to the recommended information set (fig. 6, relevant tags or product categories are formed, e.g., target object “brown bear” corresponds to the recommended information set “live streaming product 12”; a cap may also be specific to a baseball cap, etc.); and the object category corresponding to the recommended information set is the same as an object category of the object for reference (fig. 6, relevant tags or product categories are formed, e.g., target object “brown bear” corresponds to the recommended information set “live streaming product 12”; a cap may also be specific to a baseball cap, skipping rope, headphones, etc.); see [0008-0015], [0041-0052], [0063-0077], [0081-0093], [0112-0155] and [0189-0191]. Tag items may be recommended (two or more, e.g., clothes and the hangers that carry the clothes, cosmetics, etc.) with link control(s) superimposed on the stream image; a recommended item may generate multiple levels of other items; pop-up links or notifications and timestamps are displayed within the live streaming interface image, forming a plurality of specific pixel regions. The terminal records a timestamp of the live images in the stream interface and transmits it to the server; the server obtains, based on the timestamps, the live stream image from the cache, performs further item recognition on the image, and establishes communication to retrieve additional data streams based on the timestamps and other triggers. As to historical operation behavior information (user interaction using voice prompts) and video explanation information: if the historical operation behavior information of a first target object among the at least one first candidate object meets a first preset condition, then the object for reference is determined according to the first target object; if none of the historical operation behavior information of the respective first candidate objects meets the first preset condition, then the object for reference is determined according to video explanation information of the at least one first candidate object; and wherein, if none of the historical operation behavior information of the respective first candidate objects meets the first preset condition, and video explanation information of a second target object among the at least one first candidate object meets a second preset condition, then the object for reference is determined according to the second target object ([0041-0052], [0063-0077], [0081-0093], [0112-0155] and [0189-0191]); the user response is communicated to the terminal and/or server, which performs item recognition to generate information of the target item.
As to claims 4-6, YANG further discloses wherein the historical operation behavior information comprises a click count; the first preset condition is a click count of the first target object being greater than a preset count threshold; wherein the second preset condition comprises: the video explanation information of the second target object being that the second target object is in a state of being explained on the video playing page; or, the second preset condition comprises: the video explanation information of the second target object being that the second target object is in a state of having been explained on the video playing page, and an explanation completion moment of the second target object being later than an explanation completion moment of another object in the state of having been explained on the video playing page; and wherein the follow state representation information comprises historical operation behavior information; and after displaying the video playing page, updating historical operation behavior information of a third target object in response to a trigger operation for the third target object among the at least one first candidate object corresponding to the video playing page ([0063-0077], [0081-0093], [0112-0155] and [0189-0191]). The server or user terminal keeps track of user interaction and, within a predetermined period, prompts the user to say something or requests, via voice prompt, that the user interact for additional information, text, description, image, etc. The Server or Display Device “SDD” includes an item recognition module (IRM) and timestamps the link information and the streaming image(s) accordingly to further retrieve additional related information; upon receiving an input or trigger, it generates an item link region or window, sorts at least one item link with a recommendation index, and ranks the target position to display the link region or window at a location outside the streaming live item, superimposed within the live streaming image with a first, second, etc., transparency, where the first transparency is higher than the second. Furthermore, the phrase “say something” generates multiple levels of superimposed windows based on the received information, wherein the first content is displayed as a picture-in-picture overlaid on the first window in a position to avoid obscuring at least a portion of the second content of the first window (generating multiple levels of superimposed windows).
As to claims 7-9, YANG further discloses object display information of the target display object under each of the candidate accounts; wherein an object to be classified is the object for reference or the target display object; an object category of the object to be classified is determined according to object description information of the object to be classified and a pre-built object classification model; the object classification model is used for classifying the object description information of the object to be classified; and wherein the video playing page is used for displaying a first live video, and/or an information display page corresponding to the candidate account is used for displaying a second live video ([0063-0077], [0081-0093], [0112-0155] and [0189-0191]). The system keeps track of user interaction and, within a predetermined period, prompts the user to say something or requests, via voice prompt, that the user interact for additional information, text, description, image, etc.
As to claims 11-12, YANG further discloses wherein the information recommendation page comprises several recommended information sets, and each of the recommended information sets corresponds to a different object category; each of the recommended information sets comprises the account attribute information of the several candidate accounts; each of the objects to be displayed under each candidate account in a same recommended information set comprises the target display object, and the object category of the target display object is the same as an object category corresponding to each of the recommended information sets; and displaying an information display page corresponding to the target account in response to a trigger operation for the account attribute information of the target account ([0063-0077], [0081-0093], [0112-0155] and [0189-0191]); see the remarks regarding claims 1-3.
As to claim 15, YANG further discloses wherein each of the recommended information sets further comprises object display information of the target display objects under several candidate accounts; upon the information recommendation page displaying the object display information of the target display object under the target account among the several candidate accounts, the method further comprises: displaying the information display page corresponding to the target account, and displaying an information display interface corresponding to the target display object under the target account on the information display page corresponding to the target account, in response to a trigger operation for the object display information ([0063-0077], [0081-0093], [0112-0155] and [0189-0191]); see the remarks regarding claims 1-3.
As to claims 16-17, YANG further discloses wherein the information recommendation page further comprises several category tags; there is a one-to-one correspondence between the several category tags and the several recommended information sets; and the recommended information set currently displayed on the information recommendation page is a recommended information set corresponding to a first tag among the several category tags; the method further comprises: switching the recommended information set displayed on the information recommendation page from the recommended information set corresponding to the first tag to a recommended information set corresponding to a second tag, in response to a trigger operation for the second tag among the several category tags; and wherein there is a target reference object that meets a preset condition among the at least one first candidate object corresponding to the video playing page; the object category corresponding to the recommended information set is the same as an object category of the target reference object; the preset condition is: a video explanation state of the target reference object on the video playing page meeting a preset state condition; or, the preset condition is: historical operation information of the target reference object meeting a preset operation condition ([0063-0077], [0081-0093], [0112-0155] and [0189-0191]). The system keeps track of user interaction and, within a predetermined period, prompts the user to say something or requests, via voice prompt, that the user interact for additional information, text, description, image, etc.
As to claims 19 and 21, YANG further discloses wherein the object category of the target display object is determined according to object description information of the target display object and a pre-built object classification model; the object classification model is used for classifying the object description information of the target display object; the object classification model is trained by using object description information of at least one object for reference and classification annotation information of the at least one object for reference; and wherein the account attribute information comprises at least one of an identifier icon corresponding to the candidate account and an identifier text corresponding to the candidate account ([0063-0077], [0081-0093], [0112-0155] and [0189-0191]); see the remarks regarding claims 1-2.
As to claim 23, the claimed “An electronic device…” is composed of the same structural elements that were discussed with respect to claims 1-2.
As to claim 24, the claimed “A computer readable medium…” is composed of the same structural elements that were discussed with respect to claims 1-2.
As to claim 26, the claim is met as previously discussed with respect to claims 1-2.
Conclusion
6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNAN Q SHANG whose telephone number is (571)272-7355. The examiner can normally be reached Monday-Friday 7-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRUCKART BENJAMIN can be reached on 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANNAN Q SHANG/Primary Examiner, Art Unit 2424