Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/20/26 has been entered.
Response to Amendment
The Amendment filed January 20, 2026 has been entered and considered. Claims 1, 7-8, and 16 have been amended. The new grounds of rejection set forth in the present action were necessitated by Applicant's claim amendments; accordingly, this action is made final.
Response to Arguments
Applicant's arguments filed 1/20/26 have been fully considered but they are not persuasive.
Applicant argues that the prior art does not disclose the previously presented amendments to independent claims 1, 8, and 16, or the newly added amendments to independent claim 16.
Remarks of 1/20/2026 at Pgs. 13-14, 16, 17, and 19. Examiner respectfully disagrees.
Applicant argues (Pgs. 13-14 and 16):
For example, Applicant submits that Hoarty fails to disclose determining pixel values based on information to be inserted, transmitting, to an electronic device, the multimedia data in an RGB color space with the modified second set of pixels using visible light communication, or based on a luminance value and an inter-pixel distance of each pixel of the first set of pixels, selecting a second set of pixels from among the first set of pixels.
Examiner responds:
Hoarty teaches that the pixel values can be modified to "embed additional watermark data (referred to as additional data or binary data)" (Para. 75). Hoarty further discloses that "In the case of a data carrying block location determined by the watermark data" (Para. 96), as well as "an embedding of additional watermark data in a plurality of video frames (e.g., television frames), with the data representing complex messages distributed across the plurality of video frames." (Para. 97). This demonstrates that the system of Hoarty determines the pixel values based on the watermark data (the information) to be inserted. Hoarty further teaches (Para. 69, "Any type of receiving system (e.g., a television receiver system of any type) can then translate Y'CbCr to RGB (or other color space if needed) for the ultimate display of the watermark encoded video frames of the application.") as well as (Para. 115, "The modified at least pixel characteristic of the subset of pixels encodes a set of data into the region of pixels. In some examples, the at least pixel characteristic of the subset of pixels can include at least one of a hue, a saturation, or a lightness of the region of pixels."). Effectively, Hoarty discloses modifying a pixel characteristic of a subset of pixels (a second set of pixels selected from a first set of pixels) and then translating to RGB for the ultimate display.
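The color-space translation relied on in Para. 69 is simple arithmetic. As an illustrative sketch only, the following uses the common full-range BT.601/JFIF coefficients, which may differ from the exact coefficients used by Hoarty:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601/JFIF RGB -> Y'CbCr, all channels on a 0-255 scale."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform back to RGB (lossless up to floating-point rounding)."""
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return r, g, b
```

The round trip is lossless to within floating-point rounding, consistent with Hoarty's statement that "any one color space [can] be directly translated into the other color space without loss."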
Hoarty also teaches considering a luminance value (Para. 74, "As defined above, the luminance signal (Y') is limited to the range of 16 to 235 out of 0 to 255, which is 86.3% of its total range.”; Para. 114, "the process 1500 includes determining one or more pixel characteristics of the region of pixels. In some cases, the one or more pixel characteristics can include at least one of a hue, a saturation, or a lightness of the region of pixels.") as well as an inter-pixel distance (Fig. 2 shows the usage of inter-pixel distances to define a core and intermediate area, which are interchangeably used as the subset of pixels to be modified; also see Para. 102, where a look-up table is disclosed for modifying pixels based on their characteristics (inter-pixel distance), and Para. 79, where intermediate values of pixels are disclosed as being used). Hoarty then discloses (Para. 115, "the process 1500 includes modifying, based on the one or more pixel characteristics of the region of pixels, at least one pixel characteristic of a subset of pixels from the region of pixels"), where the second set of pixels is based on the previously determined pixel characteristics of the region of pixels.
Applicant argues (Pgs. 17 and 19):
For example, Applicant submits that Hoarty fails to disclose a camera configured to capture a set of continuous image frames displayed on an external device, identify a region of interest (ROI) from the continuous image frames, or determining metadata from edges of the display area, based on a Cb component in the edges of the display area where each of the edges of the display area has a fixed pixel width.
Examiner responds:
Hoarty teaches a method to process video frames, where (Para. 120, "The computing device can include any suitable device, such as a display device (e.g., a television), a camera"). Video frame processing indicates continuous image frame capturing. Hoarty also discloses (Para. 113, "At block 1504, the process 1500 includes identifying a region of pixels of the video frame"). To identify a specific region in a video frame, a region of interest is necessarily identified. Hoarty further teaches determining metadata (Para. 100, "An optional (as indicated by the dashed lines) pseudo-random encode area shift engine 1105 can determine a position or location of a pixel region (e.g., one of the positions shown in"; also see Figs. 5A-5P, where the position information at the edges of the determined display area is disclosed). Determining metadata about all pixel regions necessarily includes edge regions. Hoarty determines this based on pixel characteristics (Para. 115, "In some examples, the at least pixel characteristic of the subset of pixels can include at least one of a hue, a saturation, or a lightness of the region of pixels."; Para. 102, "The local pixel array analysis engine 1106 can analyze the surrounding pixels for each pixel region of the video frame. Once the subset of pixels (the data carrying block) in a pixel region is determined and the surrounding pixel area around the subset of pixels is examined to determine the pixel characteristics of the surrounding pixels, pixel shift engine 1107 can determine the shift in the pixel data that will be used to encode the data value, either algorithmically or via a LUT or a combination of both, as described above."). See Figs. 5A-5E, 5H, 5I, 5L, and 5M-5P, where the position information method is shown, a fixed pixel width is used, and edge pixel regions are processed.
Having a fixed pixel width for specifically the edge pixels would have been a predictable variation of the fixed pixel position processing disclosed by Hoarty.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-8, and 10-15 are rejected under 35 U.S.C. 103 as being unpatentable over Hoarty (previously cited) in view of Ganesan et al. (NPL, “Satellite Image Segmentation based on YCbCr Color Space”, published 2015, pdf attached).
Regarding claim 1, Hoarty teaches a method of visual content communication (Para. 39, “This application relates generally to embedding data into a digital video signal without visually impairing the signal in order to provide a means for conveying additional information that is typically related to the video in which the data is embedded, or triggering the substitution of alternate content to the viewer.”), the method comprising: receiving multimedia data (Para. 50, “methods… described herein can embed data within one or more regions of a video frame.”; Para. 128, “In some instances, multimodal computing devices can enable a user to provide multiple types of input to communicate with computing device architecture 1600”); identifying a first set of pixels from among a plurality of pixels (Para. 75, “FIG. 1 is a diagram illustrating a video frame 100 (e.g., stored in a video frame buffer) including various regions of pixels (also referred to herein as target regions or areas, pixel regions, or pixel patches), shown as pixel regions 1, 2, 3, 4, 5, 6, 7, through 128.”) of the multimedia data in a YCbCr color space (Para. 69, “With this understanding, the techniques described herein will be described in terms of HSL which is translated to RGB and then translated to Y′CbCr for transmission through a network to receiving device”); based on a luminance value (Para. 74, “As defined above, the luminance signal (Y′) is limited to the range of 16 to 235 out of 0 to 255, which is 86.3% of its total range. Similarly, the Cr and Cb color-difference components (or signals) are limited to 16 to 240 in a range of 255, which is 87.9% of the available range. In some implementations, for compatibility reasons, the range of HSL values used by the techniques described herein can be limited to the lightness range 905 of 86.3%, where the minimum 906 represents 6.3% above black and the maximum 904 represents 94.1% of full-scale of lightness (or 5.9% below peak white).”; Para. 
114, “the process 1500 includes determining one or more pixel characteristics of the region of pixels. In some cases, the one or more pixel characteristics can include at least one of a hue, a saturation, or a lightness of the region of pixels.”, See Para. 69, “Hence, one of ordinary skill will recognize that any discussion of color space is absolute between any two systems, allowing any one color space to be directly translated into the other color space without loss and by utilizing simple arithmetic.”, where it is disclosed that lightness and luminance are analogous and translatable) and an inter-pixel distance of each pixel of the first set of pixels (Fig. 2 shows the usage of inter-pixel distances to define a core and intermediate area, which are interchangeably used as the subset of pixels to be modified; also see Para. 102, where a look-up table is disclosed for modifying pixels based on their characteristics (inter-pixel distance)), selecting a second set of pixels from among the first set of pixels (Para. 115, “the process 1500 includes modifying, based on the one or more pixel characteristics of the region of pixels, at least one pixel characteristic of a subset of pixels from the region of pixels”); generating metadata for the second set of pixels based on an auxiliary content, wherein the auxiliary content is to be added in the multimedia data (Para. 109, “Using the techniques described herein, watermark data can be applied to a digital video signal in order to embed additional data into the video signal.”); generating a modification factor based on binary bits converted from the auxiliary content (Para. 78, “At each pixel patch location, pixels included in the subset of pixels within the defined pixel region are altered to carry one or more binary bits of information”; Para. 75, “The watermarking techniques described herein can embed additional watermark data (referred to as additional data or binary data)”; Para. 
76, “…to determine color space values to shift the values of the pixels in the encoded block”, the watermark data is converted into binary data and used to determine a shift (modification factor) of the pixel values); modifying at least one of a Cb component or a Cr component of the second set of pixels using a modification factor based on the metadata (Para. 115, “The modified at least pixel characteristic of the subset of pixels encodes a set of data into the region of pixels. In some examples, the at least pixel characteristic of the subset of pixels can include at least one of a hue, a saturation, or a lightness of the region of pixels.”, As noted above, the example is not limiting. One of ordinary skill would alternate between HSL and YCbCr processing as needed, as disclosed in Para. 69 above; Para. 110, “The substation of a certain television advertisement being broadcast as part of a television program with a different advertisement (e.g., one that has been predetermined to be more relevant for that household)”, The relevance of a modification is predetermined based on the user, indicating usage of a “modification factor”. Also see Para. 102, where a predefined Look-up table is disclosed for use in determining modification); and transmitting to an electronic device the multimedia data in an RGB color space with the modified second set of pixels using visible light communication (Para. 69, “Any type of receiving system (e.g., a television receiver system of any type) can then translate Y′CbCr to RGB (or other color space if needed) for the ultimate display of the watermark encoded video frames of the application.”).
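The claimed mechanism, as mapped above, can be illustrated with a hypothetical sketch: auxiliary content is converted to binary bits, and the Cb component of the selected pixels is shifted by a modification factor to carry those bits. The function names and the fixed shift value are illustrative assumptions, not taken from Hoarty:

```python
MOD_FACTOR = 4  # illustrative Cb shift per '1' bit (hypothetical value)

def to_bits(text):
    """Auxiliary content -> binary bits, 8 bits per ASCII character."""
    return [int(b) for ch in text.encode("ascii") for b in format(ch, "08b")]

def embed(cb_values, bits):
    """Shift the Cb component of each selected pixel to carry one bit."""
    assert len(cb_values) >= len(bits)
    out = list(cb_values)
    for i, bit in enumerate(bits):
        out[i] = cb_values[i] + (MOD_FACTOR if bit else 0)
    return out

def extract(original, modified, n_bits):
    """Recover the bits by detecting the Cb shift against the original frame."""
    return [1 if modified[i] - original[i] >= MOD_FACTOR / 2 else 0
            for i in range(n_bits)]
```

A small shift keeps the change imperceptible while remaining detectable by a receiver that compares against the reference area, in the manner of Hoarty's core/surrounding-area comparison.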
Hoarty does not explicitly disclose wherein the identifying the first set of pixels comprises: obtaining first pixels in which Cb is a dominating component from among the plurality of pixels, obtaining second pixels in which Cr is the dominating component from among the plurality of pixels, and identifying the first set of pixels based on the first pixels and the second pixels. However, Hoarty does consider the prevalence of pixel characteristics in its processing techniques (Para. 82).
Ganesan teaches obtaining first pixels in which Cb is a dominating component from among the plurality of pixels (Pg. 39, Col. 1, “The component ‘Cb’ is the chrominance dominated by the blue color”), obtaining second pixels in which Cr is the dominating component from among the plurality of pixels (Pg. 39, Col. 1, “the component ‘Cr’ is the chrominance dominated by red color”), and identifying the first set of pixels based on the first pixels and the second pixels (Fig. 6, reprinted below, shows color segmentation based on a dominant component).
[Fig. 6 of Ganesan reprinted here: media_image1.png (greyscale)]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hoarty to incorporate the teachings of Ganesan to include obtaining first pixels in which Cb is a dominating component from among the plurality of pixels, obtaining second pixels in which Cr is the dominating component from among the plurality of pixels, and identifying the first set of pixels based on the first pixels and the second pixels. Hoarty already selects pixels for modification based on pixel characteristics in the YCbCr color space. Ganesan teaches that pixels in an image may be segmented and identified based on the dominance or strength of chrominance components (Cb and Cr). One of ordinary skill in the art would recognize that incorporating the chrominance-based segmentation of Ganesan into the pixel selection process of Hoarty would improve the ability of the system to embed the auxiliary data in plain sight, increasing its robustness, as selecting pixels based on dominant chrominance allows modification in regions where change is less perceptible to the human eye, as disclosed by Ganesan (Pgs. 36-37, Cols. 2-1).
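Ganesan's chrominance-dominance segmentation can be sketched under one plausible reading (the reference's exact criterion is not reproduced here): the chrominance component whose value lies farther from the neutral chroma value 128 is treated as dominating.

```python
def dominant_component(y, cb, cr, neutral=128):
    """Classify a YCbCr pixel by its dominating chrominance component.

    Illustrative criterion only (an assumption, not Ganesan's exact method):
    the component farther from the neutral chroma value 128 dominates.
    """
    return "Cb" if abs(cb - neutral) > abs(cr - neutral) else "Cr"

def partition(pixels):
    """Split (Y, Cb, Cr) pixels into Cb-dominant and Cr-dominant sets,
    mirroring the claim 1 mapping of first and second pixels."""
    first = [p for p in pixels if dominant_component(*p) == "Cb"]
    second = [p for p in pixels if dominant_component(*p) == "Cr"]
    return first, second
```

Under this reading, the first set of pixels of claim 1 would be drawn from the union of the two dominance-segmented sets.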
Regarding claim 3, Hoarty teaches all of the elements of claim 1, as stated above, as well as wherein the selecting the second set of pixels comprises: determining the luminance value of each pixel in a first set of Cr component pixels and a first set of Cb component pixels (Para. 79, “In some examples, once the shift in the one or more pixel characteristics (e.g., the pixel-level color information, such as H, S, and/or L, as described in more detail below) is determined for the core area (e.g., the grid location B2) in FIG. 2,”, lightness is determined for the first set of component pixels, where lightness is analogous to luminance; it would have been obvious to one of ordinary skill to interchange the Cb and Cr components with the HSL characteristics used in the method, as described above); selecting intermediate pixels from among the first set of Cr component pixels and the first set of Cb component pixels having a neighboring cell with a luminance value less than a threshold luminance value (Para. 81, “For instance, the surrounding reference area (the white pixels) can be sampled to determine the H, S, and/or L (or other pixel characteristic) of the surrounding pixels. Based on the determined H, S, and/or L values (or other pixel characteristic), the pixels in the core area and the pixels in the intermediate area can be can modulated or altered based.”, See Para. 
82, “ In one illustrative example, if the average lightness (L) of the pixels in the surrounding reference area is less than 20% of full scale (e.g., less than 20% of a pixel with a dark color having, for instance, a pixel value of 0), then values of hue (H) of the pixels in the core area and/or intermediate area can be changed by, for instance, 25% and saturation (S) of the pixels can be reduced by, for instance, 50%, creating an encoded pixel area (including the core area with pixel 205 and intermediate area with pixel 203) that is detectably different from the surrounding reference area but not visible to the human eye”, where a non-limiting example is given using a lightness (analogous to luminance) threshold); and selecting the second set of pixels from among the intermediate pixels based on the inter-pixel distance (Para. 102, “The local pixel array analysis engine 1106 can analyze the surrounding pixels for each pixel region of the video frame. Once the subset of pixels (the data carrying block) in a pixel region is determined and the surrounding pixel area around the subset of pixels is examined to determine the pixel characteristics of the surrounding pixels, pixel shift engine 1107 can determine the shift in the pixel data that will be used to encode the data value, either algorithmically or via a LUT or a combination of both, as described above.”, the subset as disclosed is not limiting (See Para. 115), whether it be the intermediate area pixels, core area pixels, or both).
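The example rule quoted from Para. 82 can be transcribed as the following sketch (HSL values normalized to [0, 1]; the 20%, 25%, and 50% figures are taken directly from the quoted example, while the function shape is an illustrative assumption):

```python
def encode_area(core_hsl, surrounding_lightness, threshold=0.20):
    """Para. 82 example rule: if the surrounding reference area averages
    less than 20% lightness, shift hue by 25% and halve saturation of the
    core/intermediate pixels. HSL components are normalized to [0, 1]."""
    avg_l = sum(surrounding_lightness) / len(surrounding_lightness)
    if avg_l >= threshold:
        return core_hsl  # this branch of the quoted example applies no shift
    h, s, l = core_hsl
    return ((h + 0.25) % 1.0, s * 0.5, l)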
Regarding claim 4, Hoarty teaches all of the elements of claim 3, as stated above, as well as wherein the selecting the second set of pixels based on the inter- pixel distance comprises: determining the inter-pixel distance of each intermediate pixel by checking nearest pixels in a Cr component list or a Cb component list (Para. 102, “The local pixel array analysis engine 1106 can analyze the surrounding pixels for each pixel region of the video frame. Once the subset of pixels (the data carrying block) in a pixel region is determined and the surrounding pixel area around the subset of pixels is examined to determine the pixel characteristics of the surrounding pixels, pixel shift engine 1107 can determine the shift in the pixel data that will be used to encode the data value, either algorithmically or via a LUT or a combination of both, as described above.”, Cr and Cb are interchangeable pixel characteristics as previously disclosed); and selecting a predefined number of pixels having a maximum inter-pixel distance as the second set of pixels (Para. 88, “Read the value for each HSL parameter from the table entry to be applied to the subset of pixels in the pixel region 102 (e.g., to all pixels in the subset of pixels, as shown in FIG. 1, or to the core area and the intermediate area, as shown in FIG. 2) in percent of full-scale of each respective H, S, and L parameter (e.g., the entry in the LUT can indicate a percentage relative to the maximum value of the range, as in the example provided below)”, A maximum range of characteristics (inter-pixel distance) is disclosed to be used for defining the subset to be modified and the modifications to be done).
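The mapped inter-pixel-distance selection of claim 4 can be sketched as follows (an illustrative reading, not Hoarty's disclosed implementation): each candidate pixel's distance to its nearest neighbour within the same component list is computed, and a predefined number of pixels having the maximum such distance is kept.

```python
from math import dist

def select_by_distance(candidates, k):
    """Keep the k candidate pixels with the largest inter-pixel distance,
    i.e., the largest distance to the nearest other pixel in the list."""
    def nearest(p):
        # Distance to the closest distinct pixel in the same component list.
        return min(dist(p, q) for q in candidates if q != p)
    return sorted(candidates, key=nearest, reverse=True)[:k]
```

Spacing the modified pixels far apart in this way would plausibly reduce the visibility of the embedded data, consistent with the robustness rationale above.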
Regarding claim 5, Hoarty teaches all of the elements of claim 1, as stated above, as well as wherein the generating the metadata for the second set of pixels comprises: determining a plurality of information sets in the auxiliary content (Para. 50, “The additional information can be related to the video in which the data is embedded, can be used to trigger the substitution of alternate content to a viewer of the video, and/or can provide other information.”, Additional information is analogous to the auxiliary content. See Para. 111, “Alternative content could be stored in the memory”, where alternative content is disclosed in memory, with storing it in sets being an obvious choice); mapping pixels from the second set of pixels to the plurality of information sets in the auxiliary content (Para. 78, “At each pixel patch location, pixels included in the subset of pixels within the defined pixel region are altered to carry one or more binary bits of information”); and generating the metadata indicating the mapped pixels for the plurality of information sets (Figs. 5A-5P), wherein the metadata includes information about pixel coordinates for each information set of the auxiliary content and the at least one of the Cb component or the Cr component to be modified (Para. 81, “In some cases, the one or more pixel characteristics can include hue (H), saturation (S), and lightness (L). The H, S, and/or L shifts used to alter the data carrying pixels (e.g., the pixels in the core area, such as pixel 205, and/or the pixels in the intermediate area, such as pixel 203) can be determined to some degree algorithmically”, As stated above, HSL components are analogous to Cb and Cr components).
Regarding claim 6, Hoarty teaches all of the elements of claim 1, as stated above, as well as wherein the modifying the at least one of the Cb component or the Cr component of the second set of pixels using the modification factor comprises: obtaining a predefined modification factor for modifying the metadata of the second set of pixels (Para. 102, “Once the subset of pixels (the data carrying block) in a pixel region is determined and the surrounding pixel area around the subset of pixels is examined to determine the pixel characteristics of the surrounding pixels, pixel shift engine 1107 can determine the shift in the pixel data that will be used to encode the data value, either algorithmically or via a LUT or a combination of both, as described above”; Also see Para. 110 below); and modifying the metadata of the second set of pixels by modifying one of the Cb component or the Cr component using the modification factor based on the metadata. (Para. 115, “The modified at least pixel characteristic of the subset of pixels encodes a set of data into the region of pixels. In some examples, the at least pixel characteristic of the subset of pixels can include at least one of a hue, a saturation, or a lightness of the region of pixels.”, As noted above, the example is not limiting. One of ordinary skill would alternate between HSL and YCbCr processing as needed, as disclosed in Para. 69 above; Para. 110, “The substation of a certain television advertisement being broadcast as part of a television program with a different advertisement (e.g., one that has been predetermined to be more relevant for that household)”, The relevance of a modification is predetermined based on the user, indicating usage of a “modification factor”).
Regarding claim 7, Hoarty teaches all of the elements of claim 1, as stated above, as well as wherein the transmitting, to the electronic device, the multimedia data comprises: converting the modified second set of pixels with the YCbCr color space to the RGB color space; and displaying the multimedia data in the RGB color space with the modified second set of pixels (Para. 69, “With this understanding, the techniques described herein will be described in terms of HSL which is translated to RGB and then translated to Y′CbCr for transmission through a network to receiving device (e.g., a television display, such as an HDTV). Any type of receiving system (e.g., a television receiver system of any type) can then translate Y′CbCr to RGB (or other color space if needed) for the ultimate display of the watermark encoded video frames of the application.”).
Regarding claim 8, the electronic device recites substantially the same features as those of claim 1. It is rejected under the same analysis.
Regarding claim 10, the recited elements perform substantially the same function as those of claim 3. It is rejected under the same analysis.
Regarding claim 11, the recited elements perform substantially the same function as those of claim 4. It is rejected under the same analysis.
Regarding claim 12, the recited elements perform substantially the same function as those of claim 5. It is rejected under the same analysis.
Regarding claim 13, the recited elements perform substantially the same function as those of claim 6. It is rejected under the same analysis.
Regarding claim 14, the recited elements perform substantially the same function as those of claim 7. It is rejected under the same analysis.
Regarding claim 15, the non-transitory computer-readable recording medium (Para. 130, “A computer-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections.”) stores instructions that, when executed, perform the method of claim 1. It is rejected under the same analysis.
Claim(s) 16-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hoarty et al. in view of Singhal et al. (US Patent Pub. No. 20160343402 A1).
Regarding claim 16, Hoarty teaches an electronic device, for visual content communication, the electronic device comprising: a camera configured to capture a set of continuous image frames displayed on an external device (Para. 120, “The computing device can include any suitable device, such as a display device (e.g., a television), a camera”); a memory storing instructions (Para. 124, “couples various computing device components including computing device memory”); and at least one processor configured to execute the instructions to: identify a region of interest (ROI) from the continuous image frames (Para. 113, “At block 1504, the process 1500 includes identifying a region of pixels of the video frame”), obtain the ROI data in an RGB color space (Para. 62, “On a color display, all signals are used, and the original RGB information is decoded.”), identify a display area in the ROI (Para. 113 above, a ROI is detected, but no specific method is described for this detection), convert data in the display area from the RGB color space to a YCbCr color space (Para. 69, “With this understanding, the techniques described herein will be described in terms of HSL which is translated to RGB and then translated to Y′CbCr”), determine metadata from edges of the determined display area, each of the edges of the determined display area having a fixed pixel width (Para. 100, “An optional (as indicated by the dashed lines) pseudo-random encode area shift engine 1105 can determine a position or location of a pixel region (e.g., one of the positions shown in”; Also see Figs. 5A-5P where the position information at the edges of the determined display area is disclosed, see analysis under “Response to Arguments”), based on a predefined modification factor of a modulation of a Cb component (Para. 115, “In some examples, the at least pixel characteristic of the subset of pixels can include at least one of a hue, a saturation, or a lightness of the region of pixels.”; Para. 
102, “The local pixel array analysis engine 1106 can analyze the surrounding pixels for each pixel region of the video frame. Once the subset of pixels (the data carrying block) in a pixel region is determined and the surrounding pixel area around the subset of pixels is examined to determine the pixel characteristics of the surrounding pixels, pixel shift engine 1107 can determine the shift in the pixel data that will be used to encode the data value, either algorithmically or via a LUT or a combination of both, as described above.”), identify a location of a content block from the metadata (Para. 93, “As shown in FIG. 4A, a data carrying block 410 is in a first location in the pixel region of a first frame 401, and is relocated to a second location in the pixel region of a second frame 402 shown in FIG. 4B.”), determine a modification factor based on a change in the Cb component of the content block (Para. 81, “The H, S, and/or L shifts used to alter the data carrying pixels (e.g., the pixels in the core area, such as pixel 205, and/or the pixels in the intermediate area, such as pixel 203) can be determined to some degree algorithmically. For instance, the surrounding reference area (the white pixels) can be sampled to determine the H, S, and/or L (or other pixel characteristic) of the surrounding pixels. Based on the determined H, S, and/or L values (or other pixel characteristic), the pixels in the core area and the pixels in the intermediate area can be can modulated or altered based.”), and obtain an auxiliary content based on binary bits of the modification factor (Para. 96, “In the case of a data carrying block location determined by the watermark data…”; Para. 99, “The encoding system can then receive or obtain watermark data 1102 for a frame. In some cases, a state counter 1103 is initiated to track pseudo-random sequence placement of certain encoded pixels. 
For example, the state counter 1103 can store pixel modification decisions from the pixel shift engine 1107 (and in some cases from the local pixel array analysis engine 1106 and/or the pseudo random encode area shift engine 1105).”).
Hoarty does not explicitly disclose identifying a display area in the ROI based on a change in luminance in consecutive frames. However, Hoarty does determine a position or location of a pixel region using an unspecified method (Para. 100).
Singhal teaches to identify a display area in the ROI based on a change in luminance in consecutive frames (Para. 17, “That is, photographic inputs can be analyzed to identify portions of the input associated with an attribute of interest, for example, as indicated by a user or as a default for monitoring… To detect change or transition of color or luminance between frames, color or luminance can be detected based on changes across frames, such as a pair of consecutive frames in an input video.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hoarty to incorporate the teachings of Singhal to include identifying a display area in the ROI based on a change in luminance in consecutive frames. Hoarty teaches a system where a display area is detected as a region of interest and those pixels are used in further processing; however, no specific method is provided to detect the region. Singhal provides a method in which portions of the input can be analyzed to detect attributes of interest, with a change in luminance between frames being explicitly disclosed as a method for detection. One of ordinary skill in the art would recognize the advantage of implementing the method of Singhal into the system of Hoarty, given that the region being detected is a display screen, which is predisposed to changes in luminance across different frames.
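The Singhal-style detection relied on above can be sketched as follows (a hypothetical illustration; Singhal's actual algorithm is not reproduced): pixels whose luminance changes between consecutive frames by more than a factor are flagged, and their bounding box is taken as the candidate display area.

```python
def detect_display_area(prev_frame, curr_frame, luminance_factor=10):
    """Flag pixels whose luminance changed by more than a factor between
    consecutive frames; return the bounding box (r0, c0, r1, c1) of the
    changed region, or None if no pixel changed enough."""
    changed = [(r, c)
               for r, row in enumerate(curr_frame)
               for c, y in enumerate(row)
               if abs(y - prev_frame[r][c]) > luminance_factor]
    if not changed:
        return None
    rows = [r for r, _ in changed]
    cols = [c for _, c in changed]
    return (min(rows), min(cols), max(rows), max(cols))
```

A display screen showing video would change luminance between frames while the static background would not, which is the rationale for combining the references.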
Regarding claim 17, Hoarty as modified above teaches all of the elements of claim 16, as stated above, as well as wherein the at least one processor is further configured to execute the instructions to: extract the Cb component and a Cr component of pixels from the ROI (Para. 81, “In some cases, the one or more pixel characteristics can include hue (H), saturation (S), and lightness (L).”, The pixel characteristics are not limited; Cb and Cr are interchangeable pixel characteristics, as described above); obtain the predefined modification factor (Para. 84, “In some examples, a pixel-value look-up table (LUT) might be used where HSL values are used to look up in a key/value table a recommended substitute value of HSL for the data carrying pixels (in the core area and/or in the intermediate area).”); determine a change in the Cb component and the Cr component in successive frames of the ROI based on the modification factor (Para. 88, “read the value for each HSL parameter from the table entry to be applied to the subset of pixels in the pixel region 102 (e.g., to all pixels in the subset of pixels, as shown in FIG. 1, or to the core area and the intermediate area, as shown in FIG. 2) in percent of full-scale of each respective H, S, and L parameter”); and extract the metadata from the edges of the determined display area based on the change in the Cb component and the Cr component in successive frames of the ROI (Para. 94, “FIG. 5A-FIG. 5P are diagrams illustrating an example of a data carrying block 510 being moved to a multiplicity of locations within the pixel patch area in order to encode multiple bits per pixel patch. As shown, the data encoded pixels of the data carrying block 510 are moved to, in this example, one of 16 positions within the pixel patch area, thus encoding four bits of information.”, Data carrying blocks hold the modified pixels in order to encode additional bits of information. As seen in Figs. 5A-5P, they are at the edges of the display).
Regarding claim 20, Hoarty as modified above teaches all of the elements of claim 16, as stated above, as well as to identify luminance of all the pixels between successive frames and, based on the luminance and a luminance factor, detect coordinates of the ROI in successive frames (Singhal, Para. 17, “That is, photographic inputs can be analyzed to identify portions of the input associated with an attribute of interest, for example, as indicated by a user or as a default for monitoring… To detect change or transition of color or luminance between frames, color or luminance can be detected based on changes across frames, such as a pair of consecutive frames in an input video.”; Para. 43, “Any color or luminance change algorithms can be employed to detect such changes between frames. Color or luminance changes can be detected in accordance with analyzing any aspect of frames, such as an edge of the frame, a center of the frame, an average or median of multiple points in a frame, etc”; an average or median of multiple points (all pixels) in a frame is determined, which is necessarily compared to a factor to identify whether it is an ROI).
Claim(s) 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Hoarty et al. as modified in view of Singhal et al. above with respect to Claim 16, further in view of Lundbæk (US Patent Pub. No. 20220171874 A1, filed 11/30/2020).
Regarding claim 18, Hoarty as modified teaches all of the elements of claim 16, as stated above, as well as filtering the modified second set of pixels to obtain user specific data (Para. 111, “The watermark data can act as a trigger for a process in a smart TV to substitute alternative content (e.g., a video, an alternative ad, content from the Internet, or other alternative content) based on demographic and/or user data provided to the TV”; the encoded data is filtered based on user data); selecting at least one format and at least one medium for displaying the user specific data (Para. 111 above showcases a format and medium for display); and displaying the user specific data on the at least one medium (Para. 111 above showcases a medium for displaying user specific data).
They do not explicitly disclose generating a user association map based on a plurality of interactions of a user with a plurality of entities. However, they do take user data and demographics into account when determining what additional information or content is to be displayed.
Lundbæk teaches to generate a user association map based on a plurality of interactions of a user with a plurality of entities (Para. 170, “A center of interest embodies context data representing a location in the context mapping space 600 that corresponds to a context a user profile is likely to be interested in. The center of disinterest may be determined based on user-specific interest data, including without limitation previously-selected search results, biographical information, previous search queries, and/or the like.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hoarty in view of Singhal to incorporate the teachings of Lundbæk to include generating a user association map based on a plurality of interactions of a user with a plurality of entities. The method of Hoarty discloses providing alternative content or information to a user based on previously determined data. One of ordinary skill in the art would recognize the advantage of creating a user association map for this purpose, as disclosed by Lundbæk, to allow the system to provide more specific content that aligns with a user's preferences.
Regarding claim 19, Hoarty as modified above teaches all of the elements of claim 18, as stated above, as well as teaching wherein the at least one medium is an Internet of Things (IoT) device that is selected using an IoT state map (Hoarty, Para. 69, “In some cases, the receiving device can receive and process the video data (e.g., a set-top box, a server, or the like), and can provide the processed video data to another receiving device (e.g., a mobile device, smart television, or the like)”, Smart TV is understood as an IoT device).
Allowable Subject Matter
Claims 2 and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID A WAMBST whose telephone number is (703)756-1750. The examiner can normally be reached M-F 9-6:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571)272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID ALEXANDER WAMBST/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698