Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
DETAILED ACTION
This Office action is responsive to application No. 18/922,007, filed on 12/29/2025. Claims 1-20 are pending and have been examined.
Priority
Support for the pending claim limitations, including, but not limited to, “user agreed to participate in a media viewership measurement study”, could not be found in now-abandoned parent application 13/552,579.
However, support for “user agreed to participate in a media viewership measurement study” is found in subsequent parent application 13/831,259, now U.S. Patent 10,034,049.
Therefore, the earliest priority date to which the pending claims are entitled is 03/14/2013.
Claim Objections
Claim 8 is objected to because of the following informalities:
Claim 8 recites:
“wherein the description includes a textual description of the event included in the event list.”
There is no antecedent basis for “the description” in independent claim 1, from which claim 8 depends. However, “a description” is recited in dependent claim 6 and in dependent claim 7.
Please amend to:
--wherein a description includes a textual description of the event included in the event list.--
OR
Amend claim 8 to depend from either claim 6 or claim 7.
Appropriate correction is required.
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot in view of the new grounds of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6, 8-12, 14-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Burger et al. (US 2012/0324494) in view of Conness et al. (US 2014/0176813).
Consider claims 1, 9, and 15, Burger teaches a method, electronic device, and non-transitory computer-readable medium comprising computer executable instructions that, when executed by a processor, cause the processor to perform a method comprising: one or more processors; and memory storing one or more programs to be executed by the one or more processors, the one or more programs comprising instructions for (Paragraph 0063-0069):
determining, by a client system, that a user agreed to participate in a media viewership study (Paragraph 0020-0021 teaches collecting sensor data at the video viewing environment sensor. Paragraph 0022 teaches the viewer may elect to participate by opting-in, to provide various information described herein, including emotional response information);
displaying, on the client system, media content for viewing by the user of the client system; an event list associated with the media (Fig.1, Paragraph 0011 teaches viewers shown watching advertisements. Fig.3, Paragraph 0025 teaches a time sequence of various events, scenes, and actions occurring within the advertisement. Where time index 1 to time index T, corresponds to scenes 1 to scene Z in an advertisement);
based on the user agreeing to participate in the media viewership study, gathering, by the client system, physical indicia information associated with the user while viewing the media content (Paragraph 0009 teaches utilizing viewing environment sensors, such as image sensors, depth sensors, acoustic sensors, and potentially other sensors such as motion and biometric sensors. Such sensors may allow systems to identify individuals, detect and understand human emotional expressions, and provide real-time feedback while a viewer is watching video. Based on such feedback, an entertainment system may determine a measure of a viewer's enjoyment of the advertisement, and provide real-time responses to the perceived viewer emotional responses. Paragraph 0020-0021 teaches collecting sensor data at the video viewing environment sensor. Paragraph 0022 teaches the viewer may elect to participate by opting-in, to provide various information described herein, including emotional response information);
correlating the physical indicia information to the events included in the event list (Paragraph 0024 teaches viewer emotional response profile 304 is generated by a semantic mining module 302 running on one or more of media computing device 104 and server computing device 130 using sensor information received from one or more video viewing environment sensors. Using emotional response data from the sensor and also advertisement information 303, e.g., metadata identifying a particular advertisement the viewer was watching when the emotional response data was collected and where in the advertisement the emotional response occurred, semantic mining module 302 generates viewer emotional response profile 304, which captures the viewer’s emotional response as a function of the time position within the advertisement. Paragraph 0025 teaches semantic mining module 302 assigns emotional identifications to various behavioral and other expression data (e.g., physiological data) detected by the video viewing environment sensors. Semantic mining module 302 also indexes the viewer’s emotional expression according to a time sequence synchronized with the advertisement, for example, by times for various events, scenes, and actions occurring within the advertisement. Thus, in the example shown in FIG. 3, at time index 1 of an advertisement, semantic mining module 302 records that the viewer was bored and distracted based on physiological data, e.g., heart rate data, and human affect display data, e.g., a body language score. At later time index 2, viewer emotional response profile 304 indicates that the viewer was happy and interested in the advertisement, while at time index 3 the viewer was scared but her attention was raptly focused on the advertisement);
analyzing an interest of the user in the media content using the physical indicia information; based on the analyzing of the interest of the user, determining that an event category associated with the media content is of interest to the user (Paragraph 0028 teaches in embodiments in which an image sensor is included as a video viewing environment sensor, suitable eye tracking and/or face position tracking techniques may be employed, potentially in combination with a depth map of the video viewing environment, to determine a degree to which the viewer's attention is focused on the display device and/or the advertisement. Paragraph 0025 teaches indexing the viewer's emotional expression according to a time sequence synchronized with the advertisement, for example, by times for various events, scenes, and actions occurring within the advertisement. Thus, in the example shown in FIG. 3, at time index 1 of an advertisement, semantic mining module 302 records that the viewer was bored and distracted based on physiological data, e.g., heart rate data, and human affect display data, e.g., a body language score. At later time index 2, viewer emotional response profile 304 indicates that the viewer was happy and interested in the advertisement, while at time index 3 the viewer was scared but her attention was raptly focused on the advertisement. Viewer emotional response profile 304 is based on analysis of various environment sensors that determines user’s emotional interest for the particular event(s) within the advertisement); and
providing the interest of the user in the event category to a server system for inclusion in the media viewership study (Fig.3, Paragraph 0028 teaches an emotional response profile 304 for an advertisement in graphical form at 306. Paragraph 002 teaches receiving for a plurality of advertisements, emotional response profiles from each of a plurality of viewers. Paragraph 0033 teaches aggregating a plurality of emotional response profiles for the advertisements to form an aggregated emotional response profiles for those advertisements. Paragraph 0034 teaches aggregated emotional response profile 314, may help advertisement content creators to identify emotionally stimulating and/or interesting portions of an advertisement for a group of viewers at any suitable level of granularity. Paragraph 0009 teaches emotional responses of viewers to advertisements may be aggregated and fed to advertisement creators. For example, advertisement creators may receive information on campaigns and concepts that inspired viewer engagement with a brand, ads that inspired strong emotional reactions, and aspects of ads that inspired brand affinity by the viewer).
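For purposes of illustration only, the following sketch shows one possible way that time-stamped physical indicia samples could be correlated to a time-indexed event list and the resulting per-viewer profiles aggregated across viewers, in the manner discussed in the mappings above. The data structures, field names, score scale, and averaging scheme are assumptions introduced for explanation and are not drawn from Burger’s disclosure.

```python
# Illustrative sketch only; field names, the interest score scale, and the
# averaging scheme are hypothetical and are not drawn from Burger's disclosure.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List

@dataclass
class Event:
    time_index: int   # position of the scene/action within the advertisement
    label: str        # e.g., "scene 1"

@dataclass
class IndiciaSample:
    time_index: int        # when the sensor reading was captured
    interest_score: float  # e.g., derived from heart rate / body-language data

def viewer_profile(samples: List[IndiciaSample], events: List[Event]) -> Dict[int, float]:
    """Correlate one viewer's sensor samples to the event time indices,
    yielding an emotional-response profile as a function of time position."""
    by_event: Dict[int, List[float]] = {e.time_index: [] for e in events}
    for s in samples:
        if s.time_index in by_event:
            by_event[s.time_index].append(s.interest_score)
    return {t: mean(v) for t, v in by_event.items() if v}

def aggregate_profiles(profiles: List[Dict[int, float]]) -> Dict[int, float]:
    """Aggregate per-viewer profiles into one profile per advertisement."""
    pooled: Dict[int, List[float]] = defaultdict(list)
    for p in profiles:
        for t, score in p.items():
            pooled[t].append(score)
    return {t: mean(v) for t, v in sorted(pooled.items())}

if __name__ == "__main__":
    events = [Event(1, "scene 1"), Event(2, "scene 2"), Event(3, "scene 3")]
    viewer_a = viewer_profile([IndiciaSample(1, 0.2), IndiciaSample(2, 0.8),
                               IndiciaSample(3, 0.9)], events)
    viewer_b = viewer_profile([IndiciaSample(1, 0.4), IndiciaSample(2, 0.6),
                               IndiciaSample(3, 1.0)], events)
    print(aggregate_profiles([viewer_a, viewer_b]))
```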
Burger does not explicitly teach receiving a list of events in an event list associated with the media content, an event including an onscreen location for the event;
based on determining that the user is focused on the onscreen location for an event in the event list, analyzing an interest of the user in the media content using the physical indicia information.
In an analogous art, Conness teaches receiving a list of events in an event list associated with media content, an event including an onscreen location for the event (Paragraph 0086 teaches location, size, and/or shape of each display element may be defined using a coordinate system. Data describing location, size, and/or shape of the display elements in multiple video images may be associated with video image data or display element data. Paragraph 0087 teaches data structure of display elements may include a record for each display element. Display element records may contain information identifying which video images the display element is visible in and, for each of these video images, where the display element is located in the image. The video images may be identified using, for example, frame number or time. Paragraph 0088 teaches data structure may include a record for each video image. The video images may be identified using, for example, frame number or time. Paragraph 0091-0094 teaches several display elements that may be displayed on display 312, where display screen may contain different display elements with corresponding boundaries. Paragraph 0095 teaches boundaries may be defined using a coordinate system discussed above, where boundaries may be stored as part of the data structures described above in relation to Fig.6);
based on determining that the user is focused on the onscreen location for an event in the event list, analyzing an interest of the user in the media content using the physical indicia information (Paragraph 0096 teaches eye tracker may determine a location of a user’s gaze, i.e., the user’s gaze point, on display 312. Processing circuitry 306 receives data identifying the user's gaze point on display 312, e.g., an (x, y) coordinate indicating the position of the gaze point on display 312, and compares the location of the gaze point on display 312 to boundaries 702 and 704, which may be identified using the same coordinate system, to determine whether the gaze point is inside boundary 702, inside boundary 704, or outside both boundaries. Processing circuitry takes action when it determines that the user’s gaze point is inside the boundary 702. Paragraph 0099 teaches display element 802 corresponds to the left side of the display, which includes the man 602, and display element 804 corresponds to the right side of the display, which includes the boy 604 and the woman 606. Processing circuitry 306 receives data identifying the user's gaze point on display 312 and compares the location of the gaze point on display 312 to boundary 806 to determine whether the gaze point falls to the left side of boundary 806, i.e., on left display element 802, or to the right side of boundary 806, i.e., on right display element 804. Paragraph 0105-0108 teaches based on the user’s gaze point falling within a particular display element focus area, determining the level of interest exhibited by the user in the display element(s)).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Burger to include receiving a list of events in an event list associated with media content, an event including an onscreen location for the event; based on determining that the user is focused on the onscreen location for an event in the event list, analyzing an interest of the user in the media content using the physical indicia information, as taught by Conness, for the advantage of identifying a part of the screen that the user is viewing, where the user’s gaze point may indicate a person or item on the screen that the user is particularly interested in or engaged by (Conness – Paragraph 0001), enabling the system to finely discern user interest in different objects displayed onscreen and providing more detailed tracking data.
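For purposes of illustration only, the following sketch shows one possible form of the per-display-element records and the gaze-point-to-boundary comparison discussed in the Conness mappings above. The record fields, coordinate convention, and element names are hypothetical assumptions and are not Conness’s data structure or code.

```python
# Illustrative sketch only; record fields, coordinate convention, and element
# names are hypothetical and are not drawn from Conness's disclosure.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DisplayElement:
    element_id: str
    frames: List[int]                            # video images the element is visible in
    boundary: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) on the display

def element_at_gaze(elements: List[DisplayElement], frame: int,
                    gaze: Tuple[float, float]) -> Optional[str]:
    """Return the element whose boundary contains the user's gaze point in this
    frame, or None if the gaze point falls outside every boundary."""
    x, y = gaze
    for e in elements:
        x_min, y_min, x_max, y_max = e.boundary
        if frame in e.frames and x_min <= x <= x_max and y_min <= y <= y_max:
            return e.element_id
    return None

if __name__ == "__main__":
    elements = [
        DisplayElement("left_element", frames=[10, 11], boundary=(0.0, 0.0, 0.5, 1.0)),
        DisplayElement("right_element", frames=[10, 11], boundary=(0.5, 0.0, 1.0, 1.0)),
    ]
    # A gaze point at (0.7, 0.4) in frame 10 falls inside the right-hand boundary.
    print(element_at_gaze(elements, frame=10, gaze=(0.7, 0.4)))  # right_element
```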
Consider claims 2, 10, and 16, Burger and Conness teach wherein the method further comprises:
receiving a facial image of the user; and in response to determining that the user agreed to participate in the media viewership study based on the facial image (Burger - Paragraph 0013 teaches video viewing environment sensor system 106 may include any suitable sensors, including but not limited to one or more image sensors, depth sensors, and/or microphones or other acoustic sensors. Paragraph 0021 teaches determining an identity of a viewer in the video viewing environment from the input of sensor data. In some embodiments, the viewer's identity may be established from a comparison of image data collected by the sensor data with image data stored in the viewer's personal profile. For example, a facial similarity comparison between a face included in image data collected from the video viewing environment and an image stored in the viewer's profile may be used to establish the identity of that viewer. Paragraph 0022 teaches the viewer may elect to participate by opting-in), identifying viewership data associated with the user including (i) non-personally identifiable demographic information of the user (Burger - Paragraph 0030), (ii) presence information of the user (Burger - Paragraph 0013, 0015, 0021, 0028), and (iii) information identifying the media content (Burger - Fig.3, Paragraph 0024-0025).
Consider claims 3, 11, and 17, Burger and Conness teach wherein the client system further comprises a presence sensor (Burger - Paragraph 0013 teaches viewing environment sensor system 106 may include any suitable sensors, including but not limited to one or more image sensors, depth sensors, and/or microphones or other acoustic sensors); and
wherein the method further comprises detecting the user of the client system by detecting, via the presence sensor, presence information of the user indicating that the user is in proximity to the client system (Burger - Paragraph 0013, 0015, 0021, 0028).
Consider claims 4, 12, and 18, Burger and Conness teach wherein the presence sensor is a camera device (Burger - Paragraph 0013 teaches viewing environment sensor system 106 may include any suitable sensors, including but not limited to one or more image sensors, depth sensors, and/or microphones or other acoustic sensors. Paragraph 0063 teaches computing system may include user devices such as cameras); and
wherein the method further comprises:
gathering the physical indicia information by analyzing at least one image captured by the camera device; identifying a body position of the user based on the physical indicia information (Burger - Paragraph 0013, 0015); and
determining an interest of the user in the media content based on the body position of the user (Burger - Fig.3, Paragraph 0025 teaches a body language score, as well as attention of the user. Paragraph 0028 teaches determining viewer’s attention based on eye tracking and/or face position tracking techniques).
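For purposes of illustration only, the following sketch shows one possible way an interest level could be derived from a body-position-based score such as the body language score discussed above. The score scale and thresholds are hypothetical assumptions and are not drawn from Burger’s disclosure.

```python
# Illustrative sketch only: a simple thresholding of a hypothetical body-language
# score; neither the score scale nor the thresholds come from Burger.
def interest_from_body_position(body_language_score: float) -> str:
    """Map a body-language score in [0, 1] (higher = more engaged posture,
    e.g., leaning toward the display) to a coarse interest level."""
    if body_language_score >= 0.7:
        return "high interest"
    if body_language_score >= 0.4:
        return "moderate interest"
    return "low interest"

if __name__ == "__main__":
    print(interest_from_body_position(0.85))  # high interest
    print(interest_from_body_position(0.30))  # low interest
```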
Consider claims 6, 14, and 20, Burger and Conness teach wherein an event further includes description information that includes at least one category for describing the event; and wherein determining that an event category associated with the media content is of interest to the user further comprises determining that the at least one category for describing the event is of interest to the user (Burger - Paragraph 0028 teaches in embodiments in which an image sensor is included as a video viewing environment sensor, suitable eye tracking and/or face position tracking techniques may be employed, potentially in combination with a depth map of the video viewing environment, to determine a degree to which the viewer's attention is focused on the display device and/or the advertisement. Paragraph 0025 teaches indexing the viewer's emotional expression according to a time sequence synchronized with the advertisement, for example, by times for various events, scenes, and actions occurring within the advertisement. Thus, in the example shown in FIG. 3, at time index 1 of an advertisement, semantic mining module 302 records that the viewer was bored and distracted based on physiological data, e.g., heart rate data, and human affect display data, e.g., a body language score. At later time index 2, viewer emotional response profile 304 indicates that the viewer was happy and interested in the advertisement, while at time index 3 the viewer was scared but her attention was raptly focused on the advertisement. Viewer emotional response profile 304 is based on analysis of various environment sensors that determines user’s emotional interest for the particular event(s) within the advertisement; Conness – Paragraph 0086-0088).
Consider claim 8, Burger and Conness teach wherein the description includes a textual description of the event included in the event list (Burger - Fig.3, Paragraph 0029; Conness – Paragraph 0086-0088).
Claims 5, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Burger et al. (US 2012/0324494), in view of Conness et al. (US 2014/0176813), and further in view of Hammond (US 2014/0282645).
Consider claims 5, 13, and 19, Burger and Conness do not explicitly teach wherein the method further comprises discarding the presence information of the user after a predetermined amount of time.
In an analogous art, Hammond teaches wherein the method further comprises discarding the presence information of a user after a predetermined amount of time (Paragraph 0019 teaches panelists refer to people who have agreed to have their media exposure monitored. Paragraph 0026 teaches people meter 108 counts the number of audience members. Paragraph 0020 teaches the people meter captures an image of the audience and attempts to identify and/or identifies the audience member(s) based on the captured image. Paragraph 0044 teaches detecting images of the panelist 112 and/or other audience members in the monitored area 102. Paragraph 0036 teaches storage of information in a storage device or storage disk for any duration, e.g., for brief instances, for temporarily buffering, and/or for caching of information).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Burger and Conness to include discarding the presence information of the user after a predetermined amount of time, as taught by Hammond, for the advantage of effectively managing system resources, freeing data storage of unnecessary data and making room for new data.
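For purposes of illustration only, the following sketch shows one possible way presence records could be discarded after a predetermined amount of time, as addressed in this rejection. The retention period and record layout are hypothetical assumptions and are not Hammond’s implementation.

```python
# Illustrative sketch only: one way to expire stored presence records after a
# predetermined retention period; the retention value and record layout are
# assumptions, not Hammond's implementation.
import time
from dataclasses import dataclass
from typing import List, Optional

RETENTION_SECONDS = 60.0  # hypothetical predetermined amount of time

@dataclass
class PresenceRecord:
    user_id: str
    captured_at: float  # epoch seconds at which presence was detected

def discard_stale(records: List[PresenceRecord],
                  now: Optional[float] = None) -> List[PresenceRecord]:
    """Keep only presence records younger than the retention period."""
    current = time.time() if now is None else now
    return [r for r in records if current - r.captured_at < RETENTION_SECONDS]

if __name__ == "__main__":
    t0 = time.time()
    records = [PresenceRecord("viewer-1", t0 - 120.0),
               PresenceRecord("viewer-2", t0 - 10.0)]
    print([r.user_id for r in discard_stale(records, now=t0)])  # ['viewer-2']
```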
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Burger et al. (US 2012/0324494), in view of Conness et al. (US 2014/0176813), and further in view of Lyndon (US 6,693,648).
Consider claim 7, Burger and Conness teach wherein each event included in the event list further includes a time and a description (Burger - Fig.3, Paragraph 0023-0025, 0029; Conness – Paragraph 0086-0088), but do not explicitly teach that the event list further includes an event identifier and a duration.
In an analogous art, Lyndon teaches an event list that further includes an event identifier and a duration (Figs. 7-11, col. 8, line 13 – col. 9, line 2).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Burger and Conness to include an event list that further includes an event identifier and a duration, as taught by Lyndon, for the advantage of providing added information along with the content, enabling finer context and more in-depth processing of the data and resulting in more detailed results.
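For purposes of illustration only, the following sketch shows one possible event record carrying the combined fields addressed in this rejection, namely an event identifier, a time, a duration, and a description. The field names and example values are hypothetical assumptions and are not drawn from Burger, Conness, or Lyndon.

```python
# Illustrative sketch only; field names and example values are hypothetical and
# are not drawn from Burger, Conness, or Lyndon.
from dataclasses import dataclass
from typing import List

@dataclass
class EventRecord:
    event_id: str      # event identifier
    start_time: float  # time of the event, in seconds from the start of the media
    duration: float    # duration of the event, in seconds
    description: str   # textual description of the event

event_list: List[EventRecord] = [
    EventRecord("ev-001", start_time=0.0, duration=12.5, description="opening scene"),
    EventRecord("ev-002", start_time=12.5, duration=8.0, description="product close-up"),
]

if __name__ == "__main__":
    for ev in event_list:
        print(f"{ev.event_id}: {ev.description} ({ev.start_time}s, {ev.duration}s)")
```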
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON K LIN whose telephone number is (571)270-1446. The examiner can normally be reached on Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASON K LIN/Primary Examiner, Art Unit 2425